Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
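For example, a minimal Python sketch of a paginated query (the base URL and the pagination header name below are placeholders; substitute this site's address and inspect the actual response headers):

    import requests

    BASE_URL = "https://example.org"   # placeholder; use this site's address

    resp = requests.get(f"{BASE_URL}/api/packages",
                        params={"search": "hello", "page": 1, "limit": 20})
    resp.raise_for_status()
    packages = resp.json()                            # matching packages for this page
    total_pages = resp.headers.get("X-Total-Pages")   # header name is illustrative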
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions to perform paternity exclusion via allele matching in autopolyploid species with ploidy 4, 6, or 8. The marker data used can be genotype data (copy numbers known) or allelic phenotype data (copy numbers not known).
Compilation and digitization of the official registry of victims of state terrorism in Argentina during the last military dictatorship. The original data comes from RUVTE-ILID (2019) <https://www.argentina.gob.ar/sitiosdememoria/ruvte/informe> and <http://basededatos.parquedelamemoria.org.ar/registros/>. The title, presentes, comes from the Spanish word for "present".
Three-dimensional systematic conservation planning, conducting nested prioritization analyses across multiple depth levels and ensuring efficient resource allocation throughout the water column. It provides a structured workflow designed to address biodiversity conservation and management challenges in three dimensions, while facilitating users' choices and parameterization (Doxa et al. 2025 <doi:10.1016/j.ecolmodel.2024.110919>).
This function fits a reversible jump Bayesian piecewise exponential model that includes the intensity of each event considered, in addition to the overall rate of events.
This package implements the Phylogeny-Guided Microbiome OTU-Specific Association Test method, which boosts testing power by adaptively borrowing information from OTUs (operational taxonomic units) phylogenetically close to the target OTU. The method is built on a kernel machine regression framework, allows for flexible modeling of complex microbiome effects and adjustment for covariates, and accommodates both continuous and binary outcomes.
Probabilistic factor analysis for spatially-aware dimension reduction across multi-section spatial transcriptomics data with millions of spatial locations. For more details, see Wei Liu et al. (2023) <doi:10.1101/2023.07.11.548486>.
This package provides a comprehensive framework for model fitting and simulation of drug release kinetics, pharmacokinetics (PK), and pharmacodynamics (PD). The package implements widely used mechanistic and empirical models for in vitro drug release, including zero-order, first-order, Higuchi, Korsmeyer-Peppas, Hixson-Crowell, and Weibull models. Pharmacokinetic functionality includes linear and nonlinear functions for one- and two-compartment models for intravenous bolus and oral administration, Michaelis-Menten kinetics, and non-compartmental analysis (NCA). Pharmacodynamic and dose-response modeling is supported through Emax-based models, including stimulatory (sigmoid Emax) and inhibitory (sigmoid Imax) Hill models, four- and five-parameter logistic models, as well as median toxic dose (TD50) and lethal dose (LD50) models. The package is intended to support parameter estimation, simulation, and model comparison in pharmaceutical research, drug development, and pharmacometrics education. For more details, see Gabrielsson & Weiner (2000) <ISBN:9186274929>, Holford & Sheiner (1981) <doi:10.2165/00003088-198106060-00002>, and Manlapaz (2025) <doi:10.32614/CRAN.package.adsoRptionCMF>.
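As an illustration of the first three release models named above, here are the standard equations as a short sketch (generic formulas and parameter names, not this package's own functions):

    import numpy as np

    t = np.linspace(0, 24, 100)                 # time points, e.g. hours

    def zero_order(t, k0):                      # cumulative release grows linearly with time
        return k0 * t

    def first_order(t, k1, q0=100):             # release approaches q0 exponentially
        return q0 * (1 - np.exp(-k1 * t))

    def higuchi(t, kh):                         # release proportional to the square root of time
        return kh * np.sqrt(t)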
User-friendly functions for power and sample size analysis in one-way and two-way ANOVA settings, taking either the effect size or delta and sigma as arguments. In addition, a function for plotting power curves is available for power comparison, so results can be easily visualized by statisticians and clinical researchers.
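To illustrate the relationship between the two argument styles, here is how a one-way ANOVA effect size (Cohen's f) can be derived from group means and sigma (a generic formula, not this package's function; the numbers are made up):

    import numpy as np

    def cohens_f(group_means, sigma):
        # Cohen's f: standard deviation of the group means divided by the common sigma
        means = np.asarray(group_means, dtype=float)
        return np.sqrt(np.mean((means - means.mean()) ** 2)) / sigma

    print(cohens_f([10, 12, 14], sigma=4))      # ~0.41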
Data and statistics of Pakistan Social and Living Standards Measurement (PSLM) survey 2014-15 from Pakistan Bureau of Statistics (<http://www.pbs.gov.pk/>).
This package performs Bayesian arm-based network meta-analysis for datasets with binary, continuous, and count outcomes (Zhang et al., 2014 <doi:10.1177/1740774513498322>; Lin et al., 2017 <doi:10.18637/jss.v080.i05>).
Price volatility refers to the degree of variation in a price series over a certain period of time. This volatility is especially noticeable in agricultural commodities, adding uncertainty for farmers, traders, and others in the agricultural supply chain. Four commonly used volatility models, namely GARCH, the Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model, the exponentially weighted moving average (EWMA) model, and the Multiplicative Error Model (MEM), are selected and implemented. PWAVE, a weighted ensemble model based on particle swarm optimization (PSO), is proposed to combine the forecasts obtained from all the candidate models. This package has been developed using the algorithms of Paul et al. <doi:10.1007/s40009-023-01218-x> and Yeasin and Paul (2024) <doi:10.1007/s11227-023-05542-3>.
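For instance, the EWMA model updates the variance recursively; a minimal sketch of that recursion (generic code, not this package's interface; lambda = 0.94 is the common RiskMetrics default):

    import numpy as np

    def ewma_volatility(returns, lam=0.94):
        # sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2
        sigma2 = np.empty(len(returns))
        sigma2[0] = np.var(returns)             # initialize with the sample variance
        for t in range(1, len(returns)):
            sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
        return np.sqrt(sigma2)                  # volatility is the square root of variance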
Perform one- and two-dimensional projection pursuit, grand tour, and guided tour for big data based on data nuggets. Reference papers: [1] Beavers et al. (2024) <doi:10.1080/10618600.2024.2341896>. [2] Duan, Y., Cabrera, J., & Emir, B. (2023). "A New Projection Pursuit Index for Big Data." <doi:10.48550/arXiv.2312.06465>.
Store and retrieve data from options() using syntax derived from the here package. potions makes it straightforward to update and retrieve options, either in the workspace or during package development, without overwriting global options.
Management problems of deterministic and stochastic projects. In a deterministic context, it obtains the duration of a project and the appropriate slack for each activity, as well as a schedule of activity times (Castro, Gómez & Tejada (2007) <doi:10.1016/j.orl.2007.01.003>). It also allows the management of resources. Once the project is finished and the actual duration of each activity is known, it can determine how long the project was delayed and make a fair allocation of that delay among the activities (Bergantiños, Valencia-Toledo & Vidal-Puga (2018) <doi:10.1016/j.dam.2017.08.012>). In a stochastic context, it can estimate the average duration of the project and plot the density of this duration, as well as the densities of the earliest and latest times of the chosen activities. As in the deterministic case, it can allocate the delay observed once the project has been carried out.
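The deterministic computation is essentially the classical critical-path forward/backward pass; here is a generic sketch on a toy network (activity names and durations are made up, and this is not this package's interface):

    # Each activity maps to (duration, predecessors); predecessors are listed before dependents.
    activities = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}

    earliest = {}
    for name, (dur, preds) in activities.items():                 # forward pass
        start = max((earliest[p][1] for p in preds), default=0.0)
        earliest[name] = (start, start + dur)                     # (early start, early finish)

    project_duration = max(finish for _, finish in earliest.values())

    latest = {}
    for name in reversed(list(activities)):                       # backward pass
        dur, _ = activities[name]
        succs = [s for s, (_, ps) in activities.items() if name in ps]
        finish = min((latest[s][0] for s in succs), default=project_duration)
        latest[name] = (finish - dur, finish)                     # (late start, late finish)

    slack = {n: latest[n][0] - earliest[n][0] for n in activities}
    print(project_duration, slack)                                # slack 0 marks the critical path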
Create and customize interactive phylogenetic trees using the phylocanvas JavaScript library and the htmlwidgets package. These trees can be used directly from the R console, from RStudio, in Shiny apps, and in R Markdown documents. See <http://phylocanvas.org/> for more information on the phylocanvas library.
We extend two general method of moments estimators to panel vector autoregression models (PVAR) with p lags of endogenous variables, predetermined variables, and strictly exogenous variables. This general PVAR model contains the first-difference GMM estimator by Holtz-Eakin et al. (1988) <doi:10.2307/1913103> and Arellano and Bond (1991) <doi:10.2307/2297968>, and the system GMM estimator by Blundell and Bond (1998) <doi:10.1016/S0304-4076(98)00009-8>. We also provide specification tests (Hansen overidentification test, lag selection criterion, and stability test of the PVAR polynomial) and classical structural analysis for PVAR models, such as orthogonal and generalized impulse response functions, bootstrapped confidence intervals for impulse response analysis, and forecast error variance decompositions.
Calculate POTH for treatment hierarchies from frequentist and Bayesian network meta-analysis. POTH quantifies the certainty in a treatment hierarchy. Subset POTH, POTH residuals, and best k treatments POTH can also be calculated to improve interpretation of treatment hierarchies.
Deduplicates datasets by retaining the most complete and informative records. Identifies duplicated entries based on a specified key column, calculates completeness scores for each row, and compares values within groups. When differences between duplicates exceed a user-defined threshold, records are split into unique IDs; otherwise, they are coalesced into a single, most complete entry. Returns a list containing the original duplicates, the split entries, and the final coalesced dataset. Useful for cleaning survey or administrative data where duplicated IDs may reflect minor data entry inconsistencies.
Support Vector Machine (SVM) classification with simultaneous feature selection using penalty functions is implemented. The smoothly clipped absolute deviation (SCAD), L1-norm, Elastic Net (L1-norm and L2-norm), and Elastic SCAD (SCAD and L2-norm) penalties are available. The tuning parameters can be found using either a fixed grid or an interval search.
Pattern Sequence Based Forecasting (PSF) takes univariate time series data as input and assists in forecasting its future values. The algorithm forecasts the behavior of a time series based on the similarity of pattern sequences. Initially, clustering is performed to label the samples from the database. The labels associated with the samples are then used to forecast the future behavior of the time series. Further technical details and references regarding PSF are discussed in the vignette.
Build piecewise exponential survival models for study design (planning) and event/timeline prediction.
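A piecewise exponential model assumes a constant hazard on each time interval, so the survival function is the exponential of minus the accumulated hazard; a minimal generic sketch (interval cuts and rates are made-up values, not this package's interface):

    import numpy as np

    def piecewise_exp_survival(t, cuts, hazards):
        # S(t) = exp(-cumulative hazard), with one constant hazard per interval
        edges = np.concatenate(([0.0], cuts, [np.inf]))
        cum_hazard = 0.0
        for lo, hi, lam in zip(edges[:-1], edges[1:], hazards):
            cum_hazard += lam * (min(t, hi) - lo)
            if t <= hi:
                break
        return np.exp(-cum_hazard)

    # Example: hazard 0.05/month for months 0-6, then 0.02/month afterwards.
    print(piecewise_exp_survival(12, cuts=[6], hazards=[0.05, 0.02]))   # ~0.66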
Check compliance of event data from (business) processes with respect to specified rules. The rules supported are of three types: frequency (activities that should (not) happen x number of times), order (succession between activities), and exclusiveness ("and" and exclusive choice between activities).
Evaluates the strength of a surrogate marker by estimating the proportion of treatment effect explained (PTE) and the relative power (RP) for the optimally-transformed version of the surrogate. Details available in Wang et al. (2022) <arXiv:2209.08414>.
This package provides tools for anonymizing sensitive patient and research data. Helps protect privacy while keeping data useful for analysis. Anonymizes IDs, names, dates, locations, and ages while maintaining referential integrity. Methods based on: Sweeney (2002) <doi:10.1142/S0218488502001648>, Dwork et al. (2006) <doi:10.1007/11681878_14>, El Emam et al. (2011) <doi:10.1371/journal.pone.0028071>, Fung et al. (2010) <doi:10.1145/1749603.1749605>.
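For example, one common building block for maintaining referential integrity is deterministic salted-hash pseudonymization, sketched generically below (this is illustrative and not this package's API; the salt and ID are placeholders):

    import hashlib

    SALT = "replace-with-a-secret-salt"          # keep secret and out of version control

    def pseudonymize(record_id: str) -> str:
        # Same input always yields the same pseudonym, so links across tables are preserved.
        digest = hashlib.sha256((SALT + record_id).encode()).hexdigest()
        return "ID-" + digest[:12]

    print(pseudonymize("patient-0042"))          # identical pseudonym on every call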