Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
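For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL below is a placeholder, not the actual host of this service):

  library(httr)
  base_url <- "https://example.org"   # placeholder: substitute the real host
  resp <- GET(paste0(base_url, "/api/packages"),
              query = list(search = "hello", page = 1, limit = 20))
  content(resp)   # the matching packages
  headers(resp)   # pagination information is returned here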
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package implements methods for centrality related analyses of networks. While the package includes the possibility to build more than 20 indices, its main focus lies on index-free assessment of centrality via partial rankings obtained by neighborhood-inclusion or positional dominance. These partial rankings can be analyzed with different methods, including probabilistic methods like computing expected node ranks and relative rank probabilities (how likely is it that a node is more central than another?). The methodology is described in depth in the vignettes and in Schoch (2018) <doi:10.1016/j.socnet.2017.12.003>.
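For orientation, the neighborhood-inclusion preorder underlying the index-free approach is commonly stated as follows (standard notation from the literature, where N(u) is the neighborhood of u and N[v] the closed neighborhood of v; this is a recalled formulation, not taken from the package documentation):

  N(u) \subseteq N[v] \;\Longrightarrow\; c(u) \le c(v) \quad \text{for every standard centrality index } c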
Commodity pricing models are (systems of) stochastic differential equations that are utilized for the valuation and hedging of commodity contingent claims (i.e. derivative products on the commodity) and other commodity-related investments. Commodity pricing models that capture market dynamics are of great importance to commodity market participants in order to exercise sound investment and risk-management strategies. Parameters of commodity pricing models are estimated through maximum likelihood estimation, using available term structure futures data of a commodity. NFCP (n-factor commodity pricing) provides a framework for the modeling, parameter estimation, probabilistic forecasting, option valuation and simulation of commodity prices through state space and Monte Carlo methods, risk-neutral valuation and Kalman filtering. NFCP allows the commodity pricing model to consist of n correlated factors, with both random walk and mean-reverting elements. The n-factor commodity pricing model framework was first presented in the work of Cortazar and Naranjo (2006) <doi:10.1002/fut.20198>. Examples presented in NFCP replicate the two-factor crude oil commodity pricing model presented in the seminal work of Schwartz and Smith (2000) <doi:10.1287/mnsc.46.7.893.12034>, using the approximate term structure futures data from that study, which are provided in the NFCP package.
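As a point of reference, the two-factor model of Schwartz and Smith (2000) writes the log spot price as the sum of a short-term mean-reverting deviation and a long-term equilibrium level (standard notation from the paper; NFCP's internal parameterisation may differ):

  \ln S_t = \chi_t + \xi_t, \qquad
  d\chi_t = -\kappa \chi_t\, dt + \sigma_{\chi}\, dW_t^{\chi}, \qquad
  d\xi_t = \mu_{\xi}\, dt + \sigma_{\xi}\, dW_t^{\xi}, \qquad
  dW_t^{\chi}\, dW_t^{\xi} = \rho\, dt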
NEON observational data are provided via the NEON Data Portal <https://www.neonscience.org> and NEON API, and can be downloaded and reformatted by the neonUtilities package. NEON observational data (human-observed measurements, and analyses derived from human-collected samples, such as tree diameters and algal chemistry) are published in a format consisting of one or more tabular data files. This package provides tools for performing common operations on NEON observational data, including checking for duplicates and joining tables.
Neighbourhood functions are key components of local-search algorithms such as Simulated Annealing or Threshold Accepting. These functions take a solution and return a slightly modified copy of it, i.e. a neighbour. The package provides a function neighbourfun() that constructs such neighbourhood functions, based on parameters such as admissible ranges for elements in a solution. Numeric and logical solutions are supported. The algorithms were originally created for portfolio-optimisation applications, but can be used for other models as well. Several recipes for neighbour computations are taken from "Numerical Methods and Optimization in Finance" by M. Gilli, D. Maringer and E. Schumann (2019, ISBN:978-0128150658).
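To illustrate the idea, here is a minimal hand-written neighbour function for a logical solution (a conceptual sketch only, not the interface of neighbourfun()):

  nb <- function(x) {
    i <- sample(seq_along(x), 1)   # pick one position at random
    x[i] <- !x[i]                  # flip it to obtain a neighbouring solution
    x
  }
  x0 <- rep(FALSE, 10)
  x1 <- nb(x0)                     # differs from x0 in exactly one element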
This package provides a number series generator that creates number series items based on cognitive models.
Utilities for Natural Language Processing.
Imputes both missing covariates and (optionally) censored observations for survival data with missing covariates, using the nearest neighbor based multiple imputation algorithm described in Hsu et al. (2006) <doi:10.1002/sim.2452> and Hsu and Yu (2018) <doi:10.1177/0962280218772592>. Note that the current version can only impute for a situation with one missing covariate.
Simulate demand and attributes for ready-to-launch new products during their life cycle, or during their introduction and growth phases. You provide the number of products, attributes, time periods and/or other parameters, and npdsim can simulate the demand for each product during the considered time periods, as well as the attributes of each product. The simulation of the demand is based on the idea that each product has a shape and a level, where the level is the cumulative demand over the considered time periods, and the shape is the normalized demand across those time periods.
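As a rough numerical illustration of the shape-and-level idea (plain R, not the npdsim interface):

  shape  <- c(0.10, 0.30, 0.40, 0.20)   # normalized demand shape over 4 periods (sums to 1)
  level  <- 500                         # cumulative demand over the considered periods
  demand <- level * shape               # per-period demand: 50 150 200 100
  sum(demand) == level                  # TRUE: the total recovers the level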
With this package, it is possible to compute nonparametric simultaneous confidence intervals for relative contrast effects in the unbalanced one-way layout. Moreover, it computes simultaneous p-values. The simultaneous confidence intervals can be computed using the multivariate normal distribution, the multivariate t-distribution with a Satterthwaite approximation of the degrees of freedom, or multivariate range-preserving transformations with Logit or Probit as the transformation function. Two-sample comparisons can be performed with the same methods described above. There is no assumption on the underlying distribution function, only that the data have to be at least ordinal numbers. See Konietschke et al. (2015) <doi:10.18637/jss.v064.i09> for details.
Given any graph, the node2vec algorithm can learn continuous feature representations for the nodes, which can then be used for various downstream machine learning tasks. The techniques are detailed in the paper "node2vec: Scalable Feature Learning for Networks" by Aditya Grover and Jure Leskovec (2016), available at <arXiv:1607.00653>.
Researchers often want to evaluate whether there is a negligible relationship among variables. The negligible package provides functions that are useful for conducting negligible effect testing (also called equivalence testing). For example, there are functions for evaluating the equivalence of means or the presence of a negligible association (correlation or regression). Beribisky, N., Mara, C., & Cribbie, R. A. (2020) <doi:10.20982/tqmp.16.4.p424>. Beribisky, N., Davidson, H., Cribbie, R. A. (2019) <doi:10.7717/peerj.6853>. Shiskina, T., Farmus, L., & Cribbie, R. A. (2018) <doi:10.20982/tqmp.14.3.p167>. Mara, C. & Cribbie, R. A. (2017) <doi:10.1080/00220973.2017.1301356>. Counsell, A. & Cribbie, R. A. (2015) <doi:10.1111/bmsp.12045>. van Wieringen, K. & Cribbie, R. A. (2014) <doi:10.1111/bmsp.12015>. Goertzen, J. R. & Cribbie, R. A. (2010) <doi:10.1348/000711009x475853>. Cribbie, R. A., Gruman, J. & Arpin-Cribbie, C. (2004) <doi:10.1002/jclp.10217>.
Nonparametric efficiency measurement and statistical inference via DEA type estimators (see Färe, Grosskopf, and Lovell (1994) <doi:10.1017/CBO9780511551710>, Kneip, Simar, and Wilson (2008) <doi:10.1017/S0266466608080651> and Badunenko and Mozharovskyi (2020) <doi:10.1080/01605682.2019.1599778>) as well as Stochastic Frontier estimators for both cross-sectional data and 1st, 2nd, and 4th generation models for panel data (see Kumbhakar and Lovell (2003) <doi:10.1017/CBO9781139174411>, Badunenko and Kumbhakar (2016) <doi:10.1016/j.ejor.2016.04.049>). The stochastic frontier estimators can handle both half-normal and truncated normal models with conditional mean and heteroskedasticity. The marginal effects of determinants can be obtained.
This package provides a non-parametric test for multi-observer concordance and differences between concordances in (un)balanced data.
Addressing crucial research questions often necessitates a small sample size due to factors such as distinctive target populations, rarity of the event under study, time and cost constraints, ethical concerns, or group-level unit of analysis. Many readily available analytic methods, however, do not accommodate small sample sizes, and the choice of the best method can be unclear. The npboottprm package enables the execution of nonparametric bootstrap tests with pooled resampling to help fill this gap. Grounded in the statistical methods for small sample size studies detailed in Dwivedi, Mallawaarachchi, and Alvarado (2017) <doi:10.1002/sim.7263>, the package facilitates a range of statistical tests, encompassing independent t-tests, paired t-tests, and one-way Analysis of Variance (ANOVA) F-tests. The nonparboot() function undertakes essential computations, yielding detailed outputs which include test statistics, effect sizes, confidence intervals, and bootstrap distributions. Further, npboottprm incorporates an interactive shiny web application, nonparboot_app(), offering intuitive, user-friendly data exploration.
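As a conceptual sketch of bootstrapping with pooled resampling for two independent groups (hand-rolled base R, not the package's nonparboot() implementation; the two-sided p-value construction is an assumption of this illustration):

  pooled_boot_t <- function(x, y, B = 5000) {
    obs  <- t.test(x, y)$statistic           # observed t statistic
    pool <- c(x, y)                          # pool both groups before resampling
    stats <- replicate(B, {
      xs <- sample(pool, length(x), replace = TRUE)
      ys <- sample(pool, length(y), replace = TRUE)
      t.test(xs, ys)$statistic
    })
    mean(abs(stats) >= abs(obs))             # bootstrap p-value
  }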
The National Statistical Office of Mongolia (NSO) is the national statistical service and an organization of the Mongolian government. NSO provides open access to official data via its API <http://opendata.1212.mn/en/doc>. The package NSO1212 has functions for accessing the API service. The functions are compatible with API v2.0 and retrieve data sets and their detailed information from the API.
Digital map data of Japan for choropleth mapping, including a circle cartogram.
This package provides methods and tools for forecasting univariate time series using the NARFIMA (Neural AutoRegressive Fractionally Integrated Moving Average) model. It combines neural networks with fractional differencing to capture both nonlinear patterns and long-term dependencies. The NARFIMA model supports seasonal adjustment, Box-Cox transformations, optional exogenous variables, and the computation of prediction intervals. In addition to the NARFIMA model, this package provides alternative forecasting models including NARIMA (Neural ARIMA), NBSTS (Neural Bayesian Structural Time Series), and NNaive (Neural Naive) for performance comparison across different modeling approaches. The methods are based on algorithms introduced by Chakraborty et al. (2025) <doi:10.48550/arXiv.2509.06697>.
This package provides a flexible statistical framework for network-valued data analysis. It leverages the complexity of the space of distributions on graphs by using the permutation framework for inference as implemented in the flipr package. Currently, only the two-sample testing problem is covered; generalization to k samples and regression will be added in the future. It is a 4-step procedure where the user chooses a suitable representation of the networks, a suitable metric to embed the representation into a metric space, one or more test statistics to target specific aspects of the distributions to be compared, and a formula to compute the permutation p-value. Two types of inference are provided: a global test answering whether there is a difference between the distributions that generated the two samples, and a local test for localizing differences on the network structure. The latter is assumed to be shared by all networks of both samples. References: Lovato, I., Pini, A., Stamm, A., Vantini, S. (2020) "Model-free two-sample test for network-valued data" <doi:10.1016/j.csda.2019.106896>; Lovato, I., Pini, A., Stamm, A., Taquet, M., Vantini, S. (2021) "Multiscale null hypothesis testing for network-valued data: Analysis of brain networks of patients with autism" <doi:10.1111/rssc.12463>.
The raw dataset and model used in Lai et al. (2021) Decoupled responses of native and exotic tree diversities to distance from old-growth forest and soil phosphorous in novel secondary forests. Applied Vegetation Science, 24, e12548.
This package provides a near drop-in replacement for base::Sys.sleep() that allows more types of input to produce delays in the execution of code and can silence/prevent typical sources of error.
Helps a clinical trial team discuss the clinical goals of a well-defined biomarker with a diagnostic, staging, prognostic, or predictive purpose. From this discussion will come a statistical plan for a (non-randomized) validation trial. Both prospective and retrospective trials are supported. In a specific focused discussion, investigators should determine the range of "discomfort" for the NNT, the number needed to treat. The meaning of the discomfort range, [NNTlower, NNTupper], is that within this range most physicians would feel discomfort either in treating or in withholding treatment. A pair of NNT values bracketing that range, NNTpos and NNTneg, become the targets of the study's design. If the trial can demonstrate that a positive biomarker test yields an NNT less than NNTlower, and that a negative biomarker test yields an NNT greater than NNTupper, then the biomarker may be useful for patients. A highlight of the package is visualization of a "contra-Bayes" theorem, which produces criteria for retrospective case-control studies.
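For reference, the number needed to treat is the reciprocal of the absolute risk reduction (a standard epidemiological definition, not specific to this package):

  \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{p(\text{event} \mid \text{untreated}) - p(\text{event} \mid \text{treated})}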
Posterior sampling in several commonly used distributions using the normalized power prior as described in Duan, Ye and Smith (2006) <doi:10.1002/env.752> and Ibrahim et al. (2015) <doi:10.1002/sim.6728>. Sampling of the power parameter is achieved via either independence Metropolis-Hastings or random walk Metropolis-Hastings based on transformation.
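For orientation, the normalized power prior of Duan, Ye and Smith (2006) is usually written as follows (standard notation recalled from the literature rather than the package documentation; D0 is the historical data, a0 the power parameter and pi0 the initial prior):

  \pi(\theta, a_0 \mid D_0) = \frac{L(\theta \mid D_0)^{a_0}\, \pi_0(\theta)}{\int L(\theta \mid D_0)^{a_0}\, \pi_0(\theta)\, d\theta}\; \pi(a_0)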
Interface to gather news from the News API <https://newsapi.org/>, based on a multilevel query. A personal API key is required.
This package provides functions for normalizing psychometric test scores. The normalization aims at correcting the metrological properties of the psychometric tests such as the ceiling and floor effects and the curvilinearity (unequal interval scaling). Functions to compute and plot predictions in the natural scale of the psychometric test from the estimates of a linear mixed model estimated on the normalized scores are also provided. See Philipps et al (2014) <doi:10.1159/000365637> for details.