Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in the response headers.
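For example, here is a minimal Python sketch of calling this endpoint; the base URL and the exact pagination header names are assumptions, so substitute whatever this instance actually serves:

```python
import requests

# Hypothetical base URL -- replace with the actual host serving /api/packages.
BASE_URL = "https://example.org"

def search_packages(query, page=1, limit=20):
    """Query the package search API and return the parsed JSON body
    plus any pagination-related response headers."""
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": query, "page": page, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    # Pagination information comes back in the response headers; the exact
    # header names are not documented above, so collect anything that looks
    # pagination-related rather than assuming specific names.
    pagination = {k: v for k, v in resp.headers.items()
                  if "page" in k.lower() or "total" in k.lower()}
    return resp.json(), pagination

if __name__ == "__main__":
    results, pagination = search_packages("hello")
    print(pagination)
    print(results)
```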
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An implementation of prediction intervals for overdispersed count data, for overdispersed binomial data and for linear random effects models.
When multiple alternative treatments or interventions are available, different population groups may respond differently to them. This package implements a method that discovers the population subgroups in which a certain treatment has a better effect than the alternative treatments. It does so by first estimating the treatment effect for a given treatment, and its uncertainty, with random forests; the resulting model is then summarized by a decision tree in which the probability that the given treatment is best for a given subgroup is shown in the corresponding terminal node of the tree.
Calculate sample size or power for hierarchical endpoints. The package can handle any type of outcomes (binary, continuous, count, ordinal, time-to-event) and any number of such endpoints. It allows users to calculate sample size with a given power or to calculate power with a given sample size for hypothesis testing based on win ratios, win odds, net benefit, or DOOR (desirability of outcome ranking) as treatment effect between two groups for hierarchical endpoints. The methods of this package are described further in the paper by Barnhart, H. X. et al. (2024, <doi:10.1080/19466315.2024.2365629>).
An implementation of a non-parametric statistical model using a parallelised Monte Carlo sampling scheme. The method implemented in this package allows non-parametric inference to be regularized for small sample sizes, while also being more accurate than approximations such as variational Bayes. The concentration parameter is an effective sample size parameter, determining the faith we have in the model versus the data. When the concentration is low, the samples are close to the exact Bayesian logistic regression method; when the concentration is high, the samples are close to the simplified variational Bayes logistic regression. The method is described in full in the paper Lyddon, Walker, and Holmes (2018), "Nonparametric learning from Bayesian models with randomized objective functions" <arXiv:1806.11544>.
This package contains functions to calculate power and sample size for various study designs used in bioequivalence studies. Use known.designs() to see the designs supported. Power and sample size can be obtained based on different methods, amongst them prominently the TOST procedure (two one-sided t-tests). See README and NEWS for further information.
Global univariate minimization of Lipschitz functions is performed using the Pijavski method, published in Pijavski (1972) <DOI:10.1016/0041-5553(72)90115-2>.
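For intuition, here is a minimal Python sketch of the Pijavski (Piyavskii-Shubert) idea itself, not of this package's interface: keep a saw-tooth lower bound built from a known Lipschitz constant L and repeatedly evaluate the function where that bound is lowest. The test function and constant below are illustrative assumptions.

```python
import math

def pijavski_minimize(f, a, b, L, tol=1e-6, max_iter=200):
    """Global minimization of a univariate Lipschitz function f on [a, b].

    L is a (known or estimated) Lipschitz constant. The algorithm keeps a
    saw-tooth lower bound built from the evaluated points and repeatedly
    samples f where that lower bound is smallest.
    """
    pts = sorted([(a, f(a)), (b, f(b))])          # evaluated points (x, f(x))
    best_x, best_f = min(pts, key=lambda p: p[1])

    for _ in range(max_iter):
        # On each segment [x0, x1] the piecewise-linear lower bound attains
        # its minimum at xstar with value low.
        best_low, cand_x = math.inf, None
        for (x0, f0), (x1, f1) in zip(pts, pts[1:]):
            xstar = 0.5 * (x0 + x1) + (f0 - f1) / (2.0 * L)
            low = 0.5 * (f0 + f1) - 0.5 * L * (x1 - x0)
            if low < best_low:
                best_low, cand_x = low, xstar
        # Stop when the incumbent is within tol of the global lower bound.
        if best_f - best_low <= tol:
            break
        fx = f(cand_x)
        if fx < best_f:
            best_x, best_f = cand_x, fx
        pts.append((cand_x, fx))
        pts.sort()
    return best_x, best_f

# Toy example: a wiggly function on [2.7, 7.5] with Lipschitz constant <= 7.
print(pijavski_minimize(lambda x: math.sin(x) + math.sin(10 * x / 3),
                        2.7, 7.5, L=7))
```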
Creation and selection of PARAllel FACtor Analysis (PARAFAC) models of longitudinal microbiome data. You can import your own data with our import functions or use one of the example datasets to create your own PARAFAC models. Selection of the optimal number of components can be done using assessModelQuality() and assessModelStability(). The selected model can then be plotted using plotPARAFACmodel(). The Parallel Factor Analysis method was originally described by Carroll and Chang (1970) <doi:10.1007/BF02310791> and Harshman (1970) <https://www.psychology.uwo.ca/faculty/harshman/wpppfac0.pdf>.
Algorithms and subroutines for patient-reported outcome data analysis.
The functions are designed to find the efficient mean-variance frontier or portfolio weights for static portfolio (called Markowitz portfolio) analysis in resource economics or nature conservation. Using the nonlinear programming solver ('Rsolnp'), this package deals with the quadratic minimization of the variance-covariances without shorting (i.e., non-negative portfolio weights) studied in Ando and Mallory (2012) <doi:10.1073/pnas.1114653109>. See the examples, testing versions, and more details from: <https://github.com/ysd2004/portn>.
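As an illustration of the kind of optimization described here, the following is a minimal Python/scipy sketch of minimizing portfolio variance without shorting; it is not the package's R interface, and the covariance matrix and expected returns below are made-up toy values.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(cov, target_return=None, mean=None):
    """Minimize w' Sigma w subject to sum(w) == 1 and w >= 0 (no shorting).
    If target_return and mean are given, also require mean' w >= target_return."""
    n = cov.shape[0]
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    if target_return is not None:
        cons.append({"type": "ineq",
                     "fun": lambda w: mean @ w - target_return})
    res = minimize(lambda w: w @ cov @ w,          # portfolio variance
                   x0=np.full(n, 1.0 / n),         # start from equal weights
                   bounds=[(0.0, None)] * n,       # non-negative weights
                   constraints=cons,
                   method="SLSQP")
    return res.x

# Toy example with three hypothetical assets.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
mean = np.array([0.05, 0.08, 0.12])
print(min_variance_weights(cov, target_return=0.07, mean=mean))
```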
Compute bending energies, principal warps, partial warp scores, and the non-affine component of shape variation for 2D landmark configurations, as well as Mardia-Dryden distributions and self-similar distributions of landmarks, as described in Mitteroecker et al. (2020) <doi:10.1093/sysbio/syaa007>. Working examples are provided to decompose shape variation into small-scale and large-scale components, and to decompose the total shape variation into outline and residual shape components. Two landmark datasets are included that quantify skull morphology in humans and papionin primates, from Mitteroecker et al. (2020) <doi:10.5061/dryad.j6q573n8s> and Grunstra et al. (2020) <doi:10.5061/dryad.zkh189373>, respectively.
Finds the certainty of dominance interactions, taking indirect interactions into account.
This package implements an extension of the Chacko chi-square test for ordered vectors (Chacko, 1966, <https://www.jstor.org/stable/25051572>). Our extension brings the Chacko test to the computer age by implementing a permutation test to offer a numeric estimate of the p-value, which is particularly useful when the analytic solution is not available.
Projection Pursuit (PP) algorithm for dimension reduction based on Gaussian Mixture Models (GMMs) for density estimation using Genetic Algorithms (GAs) to maximise an approximated negentropy index. For more details see Scrucca and Serafini (2019) <doi:10.1080/10618600.2019.1598871>.
Use phenotype risk scores based on linked clinical and genetic data to study Mendelian disease and rare genetic variants. See Bastarache et al. 2018 <doi:10.1126/science.aal4043>.
Enables researchers to visualize the prediction performance of any algorithm at the individual level (or close to it), provided that the predicted outcome is either binary or continuous. The visual results are intended to be instantly comprehensible.
This package provides functions to estimate the incubation period distribution of a post-infectious syndrome, defined as the time between the symptom onset of the antecedent infection and that of the post-infectious syndrome.
Fit linear splines to species time series to detect population growth scenarios, based on Hyndman, R. J., Mesgaran, M. B. and Cousens, R. D. (2015) <doi:10.1007/s10530-015-0962-8>.
Fits the Piecewise Exponential distribution with random time grids using the clustering structure of the Product Partition Models. Details of the implemented model can be found in Demarqui et al. (2008) <doi:10.1007/s10985-008-9086-0>.
Data sets and functions used in the Polish book "Przewodnik po pakiecie R" (The Hitchhiker's Guide to the R). See more at <http://biecek.pl/R>. Among others, you will find here data about housing prices, cancer patients, running times, and many other topics.
This package implements the Bayesian hierarchical model described by Wheldon, Raftery, Clark and Gerland (see: <doi:10.1080/01621459.2012.737729>) for simultaneously estimating age-specific population counts, fertility rates, mortality rates and net international migration flows, at the national level.
This package provides functions to extract and handle commonly occurring principal phrases obtained from collections of texts. Major speed improvements: core functions have been rewritten in C++ for faster phrase-document parsing, clustering, and text distance computations. Based on Small, E., & Cabrera, J. (2025), "Principal phrase mining, an automated method for extracting meaningful phrases from text", International Journal of Computers and Applications, 47(1), 84-92.
An implementation of the one-step privacy-protecting method for estimating the overall and site-specific hazard ratios using inverse probability weighted Cox models in distributed data network studies, as proposed by Shu, Yoshida, Fireman, and Toh (2019) <doi:10.1177/0962280219869742>. This method only requires sharing of summary-level riskset tables instead of individual-level data. Both the conventional inverse probability weights and the stabilized weights are implemented.
This package provides several algorithms to compute genotype ancestry scores (such as eigenvector projections) when highly correlated individuals are involved.
This package provides a data set and functions for exploring the Multiple Indicator Cluster Survey (MICS) 2017-18 Men questionnaire data for Punjab, Pakistan. The results of the survey are critically important for Sustainable Development Goals (SDGs) monitoring, as the survey produces information on 32 global SDG indicators. The data were collected from 53,840 households selected at the second stage with systematic random sampling, out of a sample of 2,692 clusters selected using probability proportional to size sampling. Six questionnaires were used in the survey: (1) a household questionnaire to collect basic demographic information on all de jure household members (usual residents), the household, and the dwelling; (2) a water quality testing questionnaire administered in three households in each cluster of the sample; (3) a questionnaire for individual women administered in each household to all women age 15-49 years; (4) a questionnaire for individual men administered in every second household to all men age 15-49 years; (5) an under-5 questionnaire, administered to mothers (or caretakers) of all children under 5 living in the household; and (6) a questionnaire for children age 5-17 years, administered to the mother (or caretaker) of one randomly selected child age 5-17 years living in the household (<http://www.mics.unicef.org/surveys>).