Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
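For example, here is a minimal sketch of calling the endpoint from Python using only the standard library. The base URL is a placeholder (this page does not state the host), and the sketch assumes the endpoint returns JSON; adjust both to match the actual service.

    # Minimal sketch: query the package search API with the Python standard library.
    # The base URL below is a placeholder, not part of the documented API.
    import json
    import urllib.request

    BASE = "https://example.org"  # replace with the host serving this search page
    url = BASE + "/api/packages?search=hello&page=1&limit=20"

    with urllib.request.urlopen(url) as resp:
        # Pagination details (number of pages, etc.) arrive in the response headers.
        print(dict(resp.headers))
        results = json.load(resp)  # assumes the endpoint returns JSON

    print(results)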
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Analytical methods to locate and characterise ecotones, ecosystems and environmental patchiness along ecological gradients. Methods are implemented for isolated sampling or for space/time series. It includes Detrended Correspondence Analysis (Hill & Gauch (1980) <doi:10.1007/BF00048870>), fuzzy clustering (De Cáceres et al. (2010) <doi:10.1080/01621459.1963.10500845>), biodiversity indices (Jost (2006) <doi:10.1111/j.2006.0030-1299.14714.x>), and network analyses (Epskamp et al. (2012) <doi:10.18637/jss.v048.i04>) - as well as tools to explore the number of clusters in the data. Functions to produce synthetic ecological datasets are also provided.
This package provides a robust and efficient solution for working with Ethiopian dates. It can seamlessly convert to and from Gregorian dates. It is designed to be compatible with the tidyverse data workflow, including plotting with ggplot2. It ensures lightning-fast computations by integrating high-performance C++ code through the Rcpp package.
Performs Bayesian estimation of the exploratory reduced reparameterized unified model (ErRUM) described by Culpepper and Chen (2018) <doi:10.3102/1076998618791306>.
This package provides some of the most important measures for evaluating a model. Given the real and predicted classes, measures such as accuracy, sensitivity, specificity, ppv, npv, fmeasure, mcc, and more will be returned.
This package provides functions that compute probabilistic excursion sets, contour credibility regions, contour avoiding regions, and simultaneous confidence bands for latent Gaussian random processes and fields. The package also contains functions that calculate these quantities for models estimated with the INLA package. The main references for excursions are Bolin and Lindgren (2015) <doi:10.1111/rssb.12055>, Bolin and Lindgren (2017) <doi:10.1080/10618600.2016.1228537>, and Bolin and Lindgren (2018) <doi:10.18637/jss.v086.i05>. These references can be generated by the citation() function in R.
Access data related to the European Union from GISCO <https://ec.europa.eu/eurostat/web/gisco>, the Geographic Information System of the European Commission, via its REST API at <https://gisco-services.ec.europa.eu>. This package tries to make it easier to get these data into R.
This package provides tools for computing general properties, including price, quantity, elasticity, convexity, marginal revenue, and manifold, of various economic demand systems, including Linear, Translog, CES, LES, and CREMR.
Computes the Extended Chen-Poisson (ecp) distribution, survival, density, hazard, cumulative hazard and quantile functions. It also allows generating a pseudo-random sample from this distribution. The corresponding graphics are available. Functions to obtain measures of skewness and kurtosis, k-th raw moments, conditional k-th moments and the mean residual life function are also provided. For details about the ecp distribution, see Sousa-Ferreira, I., Abreu, A.M. & Rocha, C. (2023) <doi:10.57805/revstat.v21i2.405>.
The encompassing test is developed based on multi-step-ahead predictions of two nested models as in Pitarakis, J. (2023) <doi:10.48550/arXiv.2312.16099>. The statistics are standardised to a normal distribution, and the null hypothesis is that the larger model contains no additional useful information. P-values will be provided in the output.
It allows running EViews (<https://eviews.com>) programs from R, R Markdown and Quarto documents. EViews (Econometric Views) is statistical software for econometric analysis. This package integrates EViews and R and also serves as an EViews Knit-Engine for the knitr package. Write all your EViews commands in R, R Markdown or Quarto documents. For details, please consult our peer-reviewed article Mati S., Civcir I. and Abba S.I. (2023) <doi:10.32614/RJ-2023-045>.
Streamlines the fitting of common Bayesian item response models using Stan.
This package provides tools for working with iEEG matrix data, including downloading curated iEEG data from OSF (The Open Science Framework <https://osf.io/>) (EpochDownloader()), making new objects (Epoch()), processing (crop() and resample()), and visualizing the data (plot()).
Simulation of Electric Vehicles charging sessions using Gaussian models, together with time-series power demand calculations.
Error-driven learning (based on the Widrow & Hoff (1960) <https://isl.stanford.edu/~widrow/papers/c1960adaptiveswitching.pdf> learning rule, and essentially the same as the Rescorla-Wagner learning equations (Rescorla & Wagner, 1972, ISBN: 0390718017), which are also at the core of Naive Discrimination Learning (Baayen et al., 2011, <doi:10.1037/a0023851>)) can be used to explain bottom-up human learning (Hoppe et al., <doi:10.31234/osf.io/py5kd>), but is also at the core of artificial neural network applications in the form of the Delta rule. This package provides a set of functions for building small-scale simulations to investigate the dynamics of error-driven learning and its interaction with the structure of the input. For modeling error-driven learning using the Rescorla-Wagner equations, the package ndl (Baayen et al., 2011, <doi:10.1037/a0023851>) is available on CRAN at <https://cran.r-project.org/package=ndl>. However, that package currently only allows tracing of a cue-outcome combination, rather than returning the learned networks. To fill this gap, we implemented a new package with a few functions that facilitate inspection of the networks for small error-driven learning simulations. Note that our functions are not optimized for training large data sets (no parallel processing), as they are intended for small-scale simulations and course examples. (Consider the Python implementation pyndl <https://pyndl.readthedocs.io/en/latest/> for that purpose.)
Extract features from tabular data in a declarative fashion, with a focus on processing medical records. Features are specified as JSON and are independently processed before being joined. Input data can be provided as CSV files or as data frames. This setup ensures that data is transformed in a modular and reproducible manner, and allows the same pipeline to be easily applied to new data.
The EQ-5D is a widely used, standardized instrument for measuring Health-Related Quality of Life (HRQOL), developed by the EuroQol group <https://euroqol.org/>. It assesses five dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression, using either a three-level (EQ-5D-3L) or five-level (EQ-5D-5L) scale. Scores from these dimensions are commonly converted into a single utility index using country-specific value sets, which are critical in clinical and economic evaluations of healthcare and in population health surveys. The eq5dsuite package enables users to calculate utility index values for the EQ-5D instruments, including crosswalk utilities using the original crosswalk developed by van Hout et al. (2012) <doi:10.1016/j.jval.2012.02.008> (mapping EQ-5D-5L responses to EQ-5D-3L index values), or the recently developed reverse crosswalk by van Hout et al. (2021) <doi:10.1016/j.jval.2021.03.009> (mapping EQ-5D-3L responses to EQ-5D-5L index values). Users can add and/or remove user-defined value sets. Additionally, the package provides tools to analyze EQ-5D data according to the recommended guidelines outlined in "Methods for Analyzing and Reporting EQ-5D data" by Devlin et al. (2020) <doi:10.1007/978-3-030-47622-9>.
This package provides a tool to run a batch of univariate or multivariate Cox models and return tidy results.
This package implements estimation methods for parameters of common distribution families. The common d, p, q, r function family for each distribution is enriched with the ll, e, and v counterparts, computing the log-likelihood, performing estimation, and calculating the asymptotic variance-covariance matrix, respectively. Parameter estimation is performed analytically whenever possible.
This package provides various statistical methods for designing and analyzing randomized experiments. One functionality of the package is the implementation of randomized-block and matched-pair designs based on possibly multivariate pre-treatment covariates. The package also provides the tools to analyze various randomized experiments including cluster randomized experiments, two-stage randomized experiments, randomized experiments with noncompliance, and randomized experiments with missing data.
Streamlines common steps for working with animal tracking data, from raw telemetry points to summaries, interactive maps, and home range estimates. Designed to be beginner-friendly, it enables rapid exploration of spatial and movement data with minimal wrangling, providing a unified workflow for importing, summarizing, visualizing, and analyzing animal movement datasets.
The cointegration-based support vector regression model enables researchers to use data obtained from the cointegrating vector as input in the support vector regression model.
The EconDataverse is a universe of open-source packages to work seamlessly with economic data. This package is designed to make it easy to install and load multiple EconDataverse packages in a single step. Learn more about the EconDataverse at <https://www.econdataverse.org>.
The US EPA ECOTOX database is a freely available database with a treasure trove of aquatic and terrestrial ecotoxicological data. As the online search interface doesn't come with an API, this package provides the means to easily access and search the database in R. To this end, all raw tables are downloaded from the EPA website and stored in a local SQLite database <doi:10.1016/j.chemosphere.2024.143078>.
This package implements Escalation With Overdose Control trial designs for two-drug combinations, as described by Tighiouart et al. (2016) <doi:10.1002/sim.6961>. It calculates the recommended dose for the next cohorts and performs simulations to obtain operating characteristics.