Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
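For example, a minimal sketch of calling this endpoint from R with the httr package; the base URL below is a placeholder, so substitute the address where this site is hosted:

    library(httr)
    # Query the package search API; pagination details come back in the headers.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # list of matching packages
    headers(resp)   # pagination information (e.g. number of pages)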
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Designed to be compatible with the R package DBI (Database Interface) when connecting to Amazon Web Services ('AWS') Athena <https://aws.amazon.com/athena/>. To do this, the R AWS Software Development Kit ('SDK') paws <https://github.com/paws-r/paws> is used as a driver.
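A minimal sketch of the intended DBI-style workflow; the driver constructor name athena() and the connection arguments shown are assumptions for illustration, not confirmed API:

    library(DBI)
    # athena() and the connection arguments below are assumed for illustration.
    con <- dbConnect(athena(),
                     s3_staging_dir = "s3://my-bucket/athena-results/",
                     region_name    = "us-east-1")
    dbGetQuery(con, "SELECT 1 AS test")   # run a query through the DBI interface
    dbDisconnect(con)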
This package provides a flexible statistical framework for network-valued data analysis. It leverages the complexity of the space of distributions on graphs by using the permutation framework for inference as implemented in the flipr package. Currently, only the two-sample testing problem is covered; generalizations to k samples and to regression will be added in the future. The procedure has four steps: the user chooses a suitable representation of the networks, a suitable metric to embed that representation into a metric space, one or more test statistics to target specific aspects of the distributions to be compared, and a formula to compute the permutation p-value. Two types of inference are provided: a global test answering whether there is a difference between the distributions that generated the two samples, and a local test for localizing differences on the network structure. The latter is assumed to be shared by all networks of both samples. References: Lovato, I., Pini, A., Stamm, A., Vantini, S. (2020) "Model-free two-sample test for network-valued data" <doi:10.1016/j.csda.2019.106896>; Lovato, I., Pini, A., Stamm, A., Taquet, M., Vantini, S. (2021) "Multiscale null hypothesis testing for network-valued data: Analysis of brain networks of patients with autism" <doi:10.1111/rssc.12463>.
We developed a comprehensive tool that helps with the visualization and analysis of networks sharing the same variables across multiple factor levels. netShiny contains most of the popular network features, such as centrality measures, modularity, and other summary statistics (e.g. the clustering coefficient). It also contains well-known tools for examining the (dis)similarities between two networks, such as pairwise distance measures between networks, set operations on the nodes of the networks, the distribution of the edge weights, and a network representing the difference between two correlation matrices. The package also contains tools to perform bootstrapping and to find clusters in networks. See the netShiny manual for more information, documentation, and examples.
This package provides a system for writing hierarchical statistical models largely compatible with BUGS and JAGS, writing nimbleFunctions to operate models and do basic R-style math, and compiling both models and nimbleFunctions via custom-generated C++. NIMBLE includes default methods for MCMC, Laplace approximation, Monte Carlo Expectation Maximization, and some other tools. The nimbleFunction system makes it easy to do things like implement new MCMC samplers from R, customize the assignment of samplers to different parts of a model from R, and compile the new samplers automatically via C++ alongside the samplers NIMBLE provides. NIMBLE extends the BUGS/JAGS language by making it extensible: new distributions and functions can be added, including as calls to external compiled code. Although most people think of MCMC as the main goal of the BUGS/JAGS language for writing models, one can use NIMBLE for writing arbitrary other kinds of model-generic algorithms as well. A full User Manual is available at <https://r-nimble.org>.
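A small sketch of the typical workflow, using a simple normal model on simulated data for illustration:

    library(nimble)
    code <- nimbleCode({
      for (i in 1:N) y[i] ~ dnorm(mu, sd = sigma)
      mu ~ dnorm(0, sd = 10)
      sigma ~ dunif(0, 10)
    })
    # Run a default MCMC on the model; data here are simulated.
    samples <- nimbleMCMC(code = code,
                          constants = list(N = 50),
                          data = list(y = rnorm(50, mean = 2)),
                          inits = list(mu = 0, sigma = 1),
                          niter = 2000, nburnin = 500)
    summary(samples)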
With this package, it is possible to compute nonparametric simultaneous confidence intervals for relative contrast effects in the unbalanced one-way layout. Moreover, it computes simultaneous p-values. The simultaneous confidence intervals can be computed using the multivariate normal distribution, the multivariate t-distribution with a Satterthwaite approximation of the degrees of freedom, or multivariate range-preserving transformations with logit or probit as the transformation function. Two-sample comparisons can be performed with the same methods described above. There is no assumption on the underlying distribution function, only that the data have to be at least ordinal. See Konietschke et al. (2015) <doi:10.18637/jss.v064.i09> for details.
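A hedged sketch of a call, assuming the package's main nparcomp() function with a formula/data interface (the argument names and values shown are assumptions):

    library(nparcomp)
    # All-pairs nonparametric contrasts on a built-in data set; "Tukey" type assumed.
    fit <- nparcomp(breaks ~ tension, data = warpbreaks, type = "Tukey")
    summary(fit)   # simultaneous confidence intervals and p-values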
Extracts team records/schedules and player statistics for the 2020-2025 National Collegiate Athletic Association (NCAA) women's and men's divisions I, II, and III volleyball teams from <https://stats.ncaa.org>. Functions can aggregate statistics for teams, conferences, divisions, or custom groups of teams.
Variational Expectation-Maximization algorithm to fit the noisy stochastic block model to an observed dense graph and to perform node clustering. Moreover, a graph inference procedure is provided to recover the underlying binary graph. This procedure comes with a control of the false discovery rate. The method is described in the article "Powerful graph inference with false discovery rate control" by T. Rebafka, E. Roquain, F. Villers (2020) <arXiv:1907.10176>.
This package provides a bootstrap method for Respondent-Driven Sampling (RDS) that relies on the underlying structure of the RDS network to estimate uncertainty.
Estimators and variance estimators tailored to the NILS hierarchical design (Adler et al. 2020, <https://res.slu.se/id/publ/105630>; Grafström et al. 2023, <https://res.slu.se/id/publ/128235>). The National Inventories of Landscapes in Sweden (NILS) is a long-term national monitoring program that collects, analyses and presents data on Swedish nature, covering both common and rare habitats <https://www.slu.se/om-slu/organisation/institutioner/skoglig-resurshushallning/miljoanalys/nils/>.
This package provides a collection of dynamic network data sets from various sources and multiple authors represented as networkDynamic-formatted objects.
Neural decoding is a method of analyzing neural data that uses pattern classifiers to predict experimental conditions based on neural activity. NeuroDecodeR is a system of objects that makes it easy to run neural decoding analyses. For more information on neural decoding, see Meyers & Kreiman (2011) <doi:10.7551/mitpress/8404.003.0024>.
Wald test for nonlinear restrictions on model parameters and confidence intervals for nonlinear functions of parameters using the delta method. Applicable after any model, provided parameter estimates and their covariance matrix are available.
This package performs classification using the Negative Binomial distribution within Linear Discriminant Analysis (NBLDA). It is an extension of the PoiClaClu package to the Negative Binomial distribution. The classification algorithms are based on the papers by Dong et al. (2016, ISSN: 1471-2105) and Witten, DM (2011, ISSN: 1932-6157) for NBLDA and PLDA, respectively. Although PLDA is a sparse algorithm and can be used for variable selection, the algorithm proposed by Dong et al. is not sparse; it therefore uses all variables in the classifier. Here, we extend Dong et al.'s algorithm to the sparse case by shrinking overdispersion towards 0 (Yu et al., 2013, ISSN: 1367-4803) and the offset parameter towards 1 (as proposed by Witten DM, 2011). Only the classification task is supported in this version.
This data package contains the Item Response Theory (IRT) parameters for the National Center for Education Statistics (NCES) items used on the National Assessment of Educational Progress (NAEP) from 1990 to 2015. The values in these tables are used along with NAEP data to turn student item responses into scores, and they include information about item difficulty, discrimination, and guessing parameters for 3-parameter logit (3PL) items. Parameters for Generalized Partial Credit Model (GPCM) items are also included. The adjustments table contains information regarding the treatment of items (e.g., deletion of an item or a collapsing of response categories) when these items did not appear to fit the item response models used to describe the NAEP data. Transformation constants change the score estimates obtained from the IRT scaling program to the NAEP reporting metric. Values from the years 2000-2013 were taken from the NCES website <https://nces.ed.gov/nationsreportcard/> and values from 1990-1998 and 2015 were extracted from their NAEP data files. All subtest names were reduced and homogenized to one word (e.g. "Reading to gain information" became "information"). The various subtest names for univariate transformation constants were all homogenized to "univariate".
This package provides a non-parametric test for multi-observer concordance and differences between concordances in (un)balanced data.
A replacement for nls() providing tools for working with nonlinear least-squares problems. The calling structure is similar to, but much simpler than, that of the nls() function. Moreover, where nls() specifically does NOT deal with small or zero-residual problems, nlmrt is quite happy to solve them. It also attempts to be more robust in finding solutions, thereby avoiding the singular gradient messages that arise in the Gauss-Newton method within nls(). The Marquardt-Nash approach in nlmrt generally works more reliably to get a solution, though this may be one of a set of possibilities, and may also be statistically unsatisfactory. Print and summary methods were added as of August 28, 2012.
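A short sketch of the kind of call involved; the base R nls() fit is shown, and the analogous nlmrt call (written here as nlxb(), a name assumed for illustration) takes a similar formula/start/data form:

    # Base R reference fit on simulated exponential-growth data.
    df <- data.frame(x = 1:10,
                     y = 2 * exp(0.3 * (1:10)) + rnorm(10, sd = 0.2))
    fit_nls <- nls(y ~ a * exp(b * x), data = df, start = list(a = 1, b = 0.1))
    # The nlmrt analogue (name assumed) would look like:
    # library(nlmrt)
    # fit_nlxb <- nlxb(y ~ a * exp(b * x), data = df, start = list(a = 1, b = 0.1))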
R interface to the netstat command-line utility, used to retrieve and parse commonly used network statistics, including available and in-use Transmission Control Protocol (TCP) ports. Primers offering technical background information on the netstat command-line utility are available in the "Linux System Administrator's Manual" by Michael Kerrisk (2014) <https://man7.org/linux/man-pages/man8/netstat.8.html>, and on the Microsoft website (2017) <https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/netstat>.
An R wrapper for pulling data from the National Public Transport Access Nodes ('NaPTAN') API (<https://www.api.gov.uk/dft/national-public-transport-access-nodes-naptan-api/#national-public-transport-access-nodes-naptan-api>). This allows users to download NaPTAN transport information for the full dataset, by ATCO region code, or by name of region.
This comprehensive toolkit provides a consistent and extensible framework for working with missing values in vectors. (The companion package tidyimpute provides similar functionality for list-like and table-like structures.) Functions exist for the detection, removal, replacement, imputation, recollection, etc. of NAs.
Optimizing regular numeric problems in optically stimulated luminescence dating, such as: equivalent dose calculation, dose rate determination, growth curve fitting, decay curve decomposition, statistical age model optimization, and statistical plot visualization.
This package provides functions to produce advanced ASCII graphics directly in the terminal window. It utilizes the txtplot() function from the txtplot package to produce text-based histograms, empirical cumulative distribution function plots, scatterplots with fitted and regression lines, quantile plots, density plots, image plots, and contour plots.
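Since the package builds on txtplot(), a minimal sketch of the underlying text plotting it draws on (this shows the txtplot package directly, not the wrapper functions of the package described above):

    library(txtplot)
    x <- seq(0, 2 * pi, length.out = 50)
    txtplot(x, sin(x))       # ASCII scatterplot rendered in the terminal
    txtdensity(rnorm(200))   # ASCII density plot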
Nonparametric Failure Time (NFT) Bayesian Additive Regression Trees (BART): time-to-event machine learning with Heteroskedastic Bayesian Additive Regression Trees (HBART) and Low Information Omnibus (LIO) Dirichlet Process Mixtures (DPM). An NFT BART model is of the form Y = mu + f(x) + sd(x) E, where the functions f and sd have BART and HBART priors, respectively, while E has a nonparametric error distribution arising from a DPM LIO prior hierarchy. See <doi:10.1111/biom.13857> for a description of the model.
Multiple and generalized nonparametric regression using smoothing spline ANOVA models and generalized additive models, as described in Helwig (2020) <doi:10.4135/9781526421036885885>. Includes support for Gaussian and non-Gaussian responses, smoothers for multiple types of predictors (including random intercepts), interactions between smoothers of mixed types, eight different methods for smoothing parameter selection, and flexible tools for diagnostics, inference, and prediction.
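A brief sketch assuming the smoothing-spline interface ss() exported by this package, on simulated data:

    library(npreg)
    x <- seq(0, 1, length.out = 100)
    y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)
    fit <- ss(x, y)   # smoothing spline with automatic smoothing parameter selection
    summary(fit)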
Minimize a differentiable function subject to all the variables being non-negative (i.e. >= 0), using a conjugate-gradient algorithm based on a modified Polak-Ribiere-Polyak formula as described in Li, C. (2013) <https://www.hindawi.com/journals/jam/2013/986317/abs/>.
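For comparison, the same kind of non-negativity-constrained minimization can be sketched with base R's optim() and box constraints; note that this uses L-BFGS-B rather than the modified Polak-Ribiere-Polyak conjugate-gradient method this package implements:

    # Minimize f(x) = sum((x - c(2, -1))^2) subject to x >= 0, using base R.
    f  <- function(x) sum((x - c(2, -1))^2)
    gr <- function(x) 2 * (x - c(2, -1))
    optim(par = c(1, 1), fn = f, gr = gr,
          method = "L-BFGS-B", lower = 0)$par   # approximately c(2, 0)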