Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
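For example, the endpoint can be queried from R roughly as follows; the base URL is a placeholder for wherever this service is hosted, and httr is used here simply as one convenient HTTP client:

    library(httr)

    ## Placeholder host: substitute the actual address of this service.
    base_url <- "https://example.org"

    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    results <- content(resp, as = "parsed")   # matching packages for this page
    headers(resp)                             # pagination info (e.g. number of pages)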
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functionality for performing Nearest Centroid (NC) Sampling. The NC sampling procedure was developed for forestry applications and selects plots for ground measurement so as to maximize the efficiency of imputation estimates. It uses multiple auxiliary variables and multivariate clustering to search for an optimal sample. Further details are given in Melville G. & Stone C. (2016) <doi:10.1080/00049158.2016.1218265>.
This package provides utility functions and custom probability distributions for Bayesian analyses of radiocarbon dates within the nimble modelling framework. It includes various population growth models, nimbleFunction objects, as well as a suite of functions for prior and posterior predictive checks for demographic inference (Crema and Shoda (2021) <doi:10.1371/journal.pone.0251695>) and other analyses.
This package provides routines for plotting linkage and association results along a chromosome, with marker names displayed along the top border. There are also routines for generating BED and BedGraph custom tracks for viewing in the UCSC genome browser. The data reformatting program Mega2 uses this package to plot output from a variety of programs.
Estimation of relatively complex nonlinear mixed-effects models, including the Sigmoidal Mixed Model and the Piecewise Linear Mixed Model with abrupt or smooth transition, through a single intuitive line of code and with automated generation of starting values.
An R interface to the Julia package NeuralEstimators.jl. The package facilitates the user-friendly development of neural Bayes estimators, which are neural networks that map data to a point summary of the posterior distribution (Sainsbury-Dale et al., 2024, <doi:10.1080/00031305.2023.2249522>). These estimators are likelihood-free and amortised, in the sense that, once the neural networks are trained on simulated data, inference from observed data can be made in a fraction of the time required by conventional approaches. The package also supports amortised Bayesian or frequentist inference using neural networks that approximate the posterior or likelihood-to-evidence ratio (Zammit-Mangion et al., 2025, Sec. 3.2, 5.2, <doi:10.48550/arXiv.2404.12484>). The package accommodates any model for which simulation is feasible by allowing users to define models implicitly through simulated data.
This package provides functions for adaptive parallel tempering (APT) with NIMBLE models. Adapted from Lacki & Miasojedow (2016) <DOI:10.1007/s11222-015-9579-0> and Miasojedow, Moulines and Vihola (2013) <DOI:10.1080/10618600.2013.778779>.
This package provides a wrapper for the Non-Metric Space Library (NMSLIB, <https://github.com/nmslib/nmslib>), which according to the authors "is an efficient cross-platform similarity search library and a toolkit for evaluation of similarity search methods. The goal of the NMSLIB Library is to create an effective and comprehensive toolkit for searching in generic non-metric spaces. Being comprehensive is important, because no single method is likely to be sufficient in all cases. Also note that exact solutions are hardly efficient in high dimensions and/or non-metric spaces. Hence, the main focus is on approximate methods". The wrapper also includes Approximate Kernel k-Nearest-Neighbor functions based on the NMSLIB Python library.
Extends package nat (NeuroAnatomy Toolbox) by providing a collection of NBLAST-related functions for neuronal morphology comparison (Costa et al. (2016) <doi:10.1016/j.neuron.2016.06.012>).
Computes and plots the boundary between night and day.
Semi-supervised model for geographical document classification (Watanabe 2018) <doi:10.1080/21670811.2017.1293487>. This package currently contains seed dictionaries in English, German, French, Spanish, Italian, Russian, Hebrew, Arabic, Turkish, Japanese and Chinese (Simplified and Traditional).
Set of functions to estimate kidney function and other traits of interest in nephrology.
Various visual and numerical diagnosis methods for the nonlinear mixed effect model, including visual predictive checks, numerical predictive checks, and coverage plots (Karlsson and Holford, 2008, <https://www.page-meeting.org/?abstract=1434>).
This package implements the procedure from G. J. Ross (2021) - "Nonparametric Detection of Multiple Location-Scale Change Points via Wild Binary Segmentation" <arXiv:2107.01742>. This uses a version of Wild Binary Segmentation to detect multiple location-scale (i.e. mean and/or variance) change points in a sequence of univariate observations, with a strict control on the probability of incorrectly detecting a change point in a sequence which does not contain any.
This package provides a Modern and Flexible Neo4J Driver, allowing you to query data on a Neo4J server and handle the results in R. It's modern in the sense that it provides a driver that can be easily integrated into a data analysis workflow, especially by providing an API that works smoothly with other data analysis and graph packages. It's flexible in the way it returns the results, by trying to stay as close as possible to the way Neo4J returns data. That way, you have control over how you compute the results. At the same time, the result is not too complex, so that the "heavy lifting" of data wrangling is not left to the user.
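As a rough illustration of the workflow this driver enables, here is a hedged sketch; the connection class and query helper shown (neo4j_api$new() and call_neo4j()) reflect a general understanding of the package, and the server details are placeholders, so verify against the package documentation before relying on it:

    library(neo4r)

    ## Placeholder connection details for a local Neo4J server.
    con <- neo4j_api$new(
      url      = "http://localhost:7474",
      user     = "neo4j",
      password = "password"
    )

    ## Send a Cypher query; results are returned as R-friendly objects
    ## (tables or graph objects, depending on the query and options).
    res <- call_neo4j("MATCH (n) RETURN n.name LIMIT 10;", con)
    res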
Nonparametric tests for clustered data in pre-post intervention designs, documented in Cui and Harrar (2021) <doi:10.1002/bimj.201900310> and Harrar and Cui (2022) <doi:10.1016/j.jspi.2022.05.009>. In addition to the main test results described in the reference papers, this package provides a function to calculate the sample size allocations for a long-format input data set, a function for adjusted/unadjusted confidence interval calculations, and functions to visualize the distribution of data across intervention groups over time together with the adjusted/unadjusted confidence intervals.
Solves nonlinear least squares problems with optional equality and/or inequality constraints. Nonlinear iterations are globalized with a backtracking method. Linear problems are solved by dense QR decomposition from LAPACK, which can limit the size of treatable problems; on the other hand, this avoids the condition number degradation that occurs in the classical quadratic programming approach. Inequality constraints at each nonlinear iteration are handled with the NNLS method (by Lawson and Hanson). An original function lsi_ln is provided for solving linear least squares problems with inequality constraints in the least-norm sense, so that a solution can still be provided when the Jacobian of the problem is rank deficient, although truncation errors are likely in this case. Equality constraints are treated by using a basis of the null space. The user-defined function calculating residuals must return a list containing the residual vector (not its squared sum) and the Jacobian. If the Jacobian is not in the returned list, the numDeriv package is used to calculate a finite-difference approximation. The NLSIC method was first published in Sokol et al. (2012) <doi:10.1093/bioinformatics/btr716>.
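As a minimal sketch of the residual-function contract described above, the following fits y ~ a*exp(-b*x) to toy data; the nlsic() call signature and the list element names (res, jacobian) used here are assumptions to check against the package documentation:

    library(nlsic)

    ## Toy data for an exponential decay model.
    x <- seq(0, 5, by = 0.5)
    y <- 2 * exp(-0.7 * x) + rnorm(length(x), sd = 0.05)

    ## Residuals and Jacobian, returned as a list as the description requires;
    ## the element names 'res' and 'jacobian' are assumptions.
    resid_fn <- function(par, ...) {
      r <- par[1] * exp(-par[2] * x) - y                 # residual vector
      jac <- cbind(exp(-par[2] * x),                     # d r / d a
                   -par[1] * x * exp(-par[2] * x))       # d r / d b
      list(res = r, jacobian = jac)
    }

    ## Assumed call: starting values first, then the residual function.
    fit <- nlsic(c(a = 1, b = 0.5), resid_fn)
    fit$par                                              # assumed field with the estimates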
A nomogram, which can be produced with the rms package, provides a graphical explanation of a prediction process. However, it is not easy to draw straight lines or to read points and probabilities accurately from it, and it is tedious to calculate total points and probabilities for all subjects by hand. This package provides the formula_rd() and formula_lp() functions to fit the formula of total points with raw data and with linear predictors, respectively, by polynomial regression. The points_cal() function calculates the total points, and prob_cal() calculates the probabilities after lrm(), cph() or psm() regression. For more complex conditions, such as interactions or restricted cubic splines, TotalPoints.rms() can be used.
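A hedged sketch of the workflow implied above: fit a model with rms, build a nomogram, then recover total points from the linear predictors. The package name (nomogramFormula) and the exact arguments of formula_lp() and points_cal() are assumptions to verify against the documentation:

    library(rms)
    library(nomogramFormula)   # assumed home of formula_lp() and points_cal()

    set.seed(1)
    df <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
    df$y <- rbinom(100, 1, plogis(0.5 * df$x1 - 0.8 * df$x2))

    dd <- datadist(df); options(datadist = "dd")
    fit <- lrm(y ~ x1 + x2, data = df)
    nom <- nomogram(fit, fun = plogis, funlabel = "Probability")

    res <- formula_lp(nomogram = nom)            # fit total points ~ linear predictor
    pts <- points_cal(formula = res$formula,     # total points for each subject
                      lp = fit$linear.predictors)
    head(pts)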
This package provides functions for classifying sparseness in 2 x 2 categorical data where one or more cells have zero counts. The classification uses three widely applied summary measures: Risk Difference (RD), Relative Risk (RR), and Odds Ratio (OR). Helps in selecting suitable continuity corrections for zero cells in multi-centre or meta-analysis studies. Also supports sensitivity analysis and can detect phenomena such as Simpson's paradox. The methodology is based on Subbiah and Srinivasan (2008) <doi:10.1016/j.spl.2008.06.023>.
Estimates the relative transmission probabilities between cases in an infectious disease outbreak or cluster using naive Bayes. Included are various functions to use these probabilities to estimate transmission parameters such as the generation/serial interval and reproductive number as well as finding the contribution of covariates to the probabilities and visualizing results. The ideal use is for an infectious disease dataset with metadata on the majority of cases but more informative data such as contact tracing or pathogen whole genome sequencing on only a subset of cases. For a detailed description of the methods see Leavitt et al. (2020) <doi:10.1093/ije/dyaa031>.
An implementation of Markov Model Multiple Imputation, with an application to the COVID-19 scale NIAID OS.
Omics data come in different forms: gene expression, methylation, copy number, protein measurements and more. NCutYX allows clustering of variables, of samples, and of both variables and samples (biclustering), while incorporating the dependencies across multiple types of Omics data (S. J. Teran Hidalgo et al. (2017) <doi:10.1186/s12864-017-3990-1>).
Cross-Entropy optimisation of unconstrained deterministic and noisy functions, as illustrated in Rubinstein and Kroese (2004, ISBN: 978-1-4419-1940-3), through a highly flexible and customisable function which allows the user to define custom variable domains, sampling distributions, updating and smoothing rules, and stopping criteria. Several built-in methods and settings make the package very easy to use for standard optimisation problems.
This package provides tools to support research on vowel covariation. Methods are provided to support Principal Component Analysis workflows (as in Brand et al. (2021) <doi:10.1016/j.wocn.2021.101096> and Wilson Black et al. (2023) <doi:10.1515/lingvan-2022-0086>).
Get or set the UNIX priority (niceness) of the running R process.