Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in response headers.
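For example, here is a minimal Python sketch of querying this endpoint with the requests library. The base URL is a placeholder for whichever instance you are querying, and the exact pagination header names and body format are not specified here, so the sketch simply prints what comes back:

import requests

# Placeholder base URL -- replace with the address of the search instance you use.
BASE_URL = "https://example.org"

# GET /api/packages?search=hello&page=1&limit=20
resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
    timeout=10,
)
resp.raise_for_status()

# Pagination information (such as the number of pages) is returned in response headers.
for name, value in resp.headers.items():
    print(name, value)

# The response body contains the matching packages for this page.
print(resp.text)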
If you'd like to join our channel search, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a function for fitting various penalized Bayesian cumulative link ordinal response models when the number of parameters exceeds the sample size. These models have been described in Zhang and Archer (2021) <doi:10.1186/s12859-021-04432-w>.
This package implements a modified version of alternating logistic regressions (ALR) with estimation based on orthogonalized residuals (ORTH), which uses paired estimating equations to jointly estimate parameters in marginal mean and within-cluster association models. The within-cluster association between ordinal responses is modeled by global pairwise odds ratios (POR). A finite-sample bias correction to the POR parameter estimates is provided, based on matrix multiplicative adjusted orthogonalized residuals (MMORTH) for correcting the estimating equations, along with bias-corrected variance estimators such as BC1, BC2, and BC3.
This package provides tools to segment fire scars and assess severity and vegetation regeneration using Otsu thresholding on Relative Burn Ratio (RBR) and differenced Normalized Burn Ratio (dNBR) image composites. Includes support for mosaic handling, polygon metrics, post-fire regeneration detection, day-of-year flagging, and validation against reference datasets. Designed for analysis of fire history in the Iberian Peninsula. Input Landsat composites follow the methodology described in Quintero et al. (2025) <doi:10.2139/ssrn.4929831>.
This package provides functions to perform subspace clustering and classification.
Fits ordinal regression models with elastic net penalty. Supported model families include cumulative probability, stopping ratio, continuation ratio, and adjacent category. These families are a subset of vector GLMs, which belong to a model class we call the elementwise link multinomial-ordinal (ELMO) class. Each family in this class links a vector of covariates to a vector of class probabilities. Each of these families has a parallel form, which is appropriate for ordinal response data, as well as a nonparallel form that is appropriate for an unordered categorical response, or as a more flexible model for ordinal data. The parallel model has a single set of coefficients, whereas the nonparallel model has a set of coefficients for each response category except the baseline category. It is also possible to fit a model with both parallel and nonparallel terms, which we call the semi-parallel model. The semi-parallel model has the flexibility of the nonparallel model, but the elastic net penalty shrinks it toward the parallel model. For details, refer to Wurm, Hanlon, and Rathouz (2021) <doi:10.18637/jss.v099.i06>.
Data input/output functions for data that conform to the Digital Imaging and Communications in Medicine (DICOM) standard, part of the Rigorous Analytics bundle.
Allows users to download and analyze official data on Brazil's federal budget through the SPARQL endpoint provided by the Integrated Budget and Planning System ('SIOP'). This package enables access to detailed information on budget allocations and expenditures of the federal government, making it easier to analyze and visualize these data. Technical information on the Brazilian federal budget is available (Portuguese only) at <https://www1.siop.planejamento.gov.br/mto/>. The SIOP endpoint is available at <https://www1.siop.planejamento.gov.br/sparql/>.
Allows access to a proof-of-concept database containing Open Access species range models and relevant metadata. Access to the database is via both a PostgreSQL connection and an API <https://github.com/EnquistLab/Biendata-Frontend>, allowing diverse use cases.
This package provides functions for the design process of survey sampling, with specific tools for multi-wave and multi-phase designs. Perform optimum allocation using Neyman (1934) <doi:10.2307/2342192> or Wright (2012) <doi:10.1080/00031305.2012.733679> allocation, split strata based on quantiles or values of known variables, randomly select samples from strata, allocate sampling waves iteratively, and organize a complex survey design. Also includes a Shiny application for observing the effects of different strata splits. A paper on this package was published in the Journal of Statistical Software <doi:10.18637/jss.v114.i10>.
Likelihood-based optimal partitioning and indicator species analysis. Finds the best binary partition for each species based on model selection, with the option to take modifying/confounding variables into account, as described in Kemencei et al. (2014) <doi:10.1556/ComEc.15.2014.2.6>. The package implements binary and multi-level response models, various measures of uncertainty, and Lorenz-curve based thresholding, with native support for parallel computations.
Robust multi-criteria land-allocation optimization that explicitly accounts for the uncertainty of the indicators in the objective function. Solves the problem of allocating scarce land to various land-use options with regard to multiple, coequal indicators. The method aims to find the land allocation that represents the indicator composition with the best possible trade-off under uncertainty. optimLanduse includes the actual optimization procedure as described by Knoke et al. (2016) <doi:10.1038/ncomms11877> and the post-hoc calculation of the portfolio performance as presented by Gosling et al. (2020) <doi:10.1016/j.jenvman.2020.110248>.
This package provides a collection of numerical optimization algorithms: one is a simple implementation of the primitive grid search algorithm; the other is an extension of the simulated annealing algorithm that can take custom boundaries into account. The methodology for this bounded simulated annealing algorithm is due to Haario and Saksman (1991) <doi:10.2307/1427681>.
Aims to support all features of the system credential store, including non-portable ones. Supports Keychain on macOS and Credential Manager on Windows. See the keyring package if you need a portable API.
Distance based bipartite matching using minimum cost flow, oriented to matching of treatment and control groups in observational studies (Hansen and Klopfer 2006 <doi:10.1198/106186006X137047>). Routines are provided to generate distances from generalised linear models (propensity score matching), formulas giving variables on which to limit matched distances, stratified or exact matching directives, or calipers, alone or in combination.
Download and import of OpenStreetMap ('OSM') data as sf or sp objects. OSM data are extracted from the Overpass web server (<https://overpass-api.de/>) and processed with very fast C++ routines for return to R.
Facilitates the creation of intuitive figures to describe metabolomics data by utilizing Kyoto Encyclopedia of Genes and Genomes (KEGG) hierarchy data, and gathers functional orthology and gene data from the KEGG-REST API.
An interface to easily run local language models with an Ollama <https://ollama.com> server and API endpoints (see <https://github.com/ollama/ollama/blob/main/docs/api.md> for details). It lets you run open-source large language models locally on your machine.
Multiple tools are now available for inferring the personalised germ line set from an adaptive immune receptor repertoire. Output from these tools is converted to a single format and supplemented with rich data such as usage and characterisation of novel germ line alleles. This data can be particularly useful when considering the validity of novel inferences. Use of the analysis provided is described in <doi:10.3389/fimmu.2019.00435>.
This package provides a penalized regression framework that can simultaneously estimate the optimal treatment strategy and identify important variables. Appropriate for either censored or uncensored continuous response.
Compound deconvolution for chromatographic data, including gas chromatography - mass spectrometry (GC-MS) and comprehensive gas chromatography - mass spectrometry (GCxGC-MS). The package includes functions to perform independent component analysis - orthogonal signal deconvolution (ICA-OSD), independent component regression (ICR), multivariate curve resolution (MCR-ALS) and orthogonal signal deconvolution (OSD) alone.
This package provides a method that analyzes quality control metrics from multi-sample genomic sequencing studies and nominates poor quality samples for exclusion. Per-sample quality control data are transformed into z-scores and aggregated. The distribution of aggregated z-scores is modelled using parametric distributions. The parameters of the optimal model, selected either by goodness-of-fit statistics or user designation, are used for outlier nomination. Two implementations of the Cosine Similarity Outlier Detection algorithm are provided, with flexible parameters for dataset customization.
The separate p-values of SNPs, RNA expressions and DNA methylations are calculated by KM regression. The correlation between different omics data is taken into account. This method can be applied either to samples with all three types of omics data or to samples with two types.
Solves penalized least squares problems for big tall data using the orthogonalizing EM algorithm of Xiong et al. (2016) <doi:10.1080/00401706.2015.1054436>. The main fitting function is oem() and the functions cv.oem() and xval.oem() are for cross validation, the latter being an accelerated cross validation function for linear models. The big.oem() function allows for out of memory fitting. A description of the underlying methods and code interface is described in Huling and Chien (2022) <doi:10.18637/jss.v104.i06>.
High-performance implementation of 36 optimal binning algorithms (16 categorical, 20 numerical) for Weight of Evidence ('WoE') transformation, credit scoring, and risk modeling. Includes advanced methods such as Mixed Integer Linear Programming ('MILP'), Genetic Algorithms, Simulated Annealing, and Monotonic Regression. Features automatic method selection based on Information Value ('IV') maximization, strict monotonicity enforcement, and efficient handling of large datasets via Rcpp. Fully integrated with the tidymodels ecosystem for building robust machine learning pipelines. Based on methods described in Siddiqi (2006) <doi:10.1002/9781119201731> and Navas-Palencia (2020) <doi:10.48550/arXiv.2001.08025>.