Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned
in the response headers.
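For example, a minimal sketch of querying this endpoint from R, assuming the httr package; the base URL is a placeholder, substitute the actual host:

library(httr)

# Search for "hello", first page, 20 items per page.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

results <- content(resp, as = "parsed")  # parsed response body
headers(resp)                            # pagination information lives here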
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools for working with the National Hydrography Dataset, with functions for querying, downloading, and networking both the NHD <https://www.usgs.gov/national-hydrography> and NHDPlus <https://www.epa.gov/waterdata/nhdplus-national-hydrography-dataset-plus> datasets.
This package performs nonparametric analysis of longitudinal data in factorial experiments. Longitudinal data are collected from the same subjects over time and arise frequently in the biological sciences. Nonparametric methods do not require distributional assumptions and are applicable to a variety of data types (continuous, discrete, purely ordinal, and dichotomous). Such methods are also robust to outliers and suitable for small sample sizes.
Constructs (non)additive genetic relationship matrices, and their inverses, from a pedigree to be used in linear mixed effect models (a.k.a. the animal model). Also includes other functions to facilitate the use of animal models. Some functions are designed to be used in conjunction with the R package asreml for the ASReml software, which can be obtained upon purchase from VSN International (<https://vsni.co.uk/software/asreml>).
Cross-Entropy optimisation of unconstrained deterministic and noisy functions, as illustrated in Rubinstein and Kroese (2004, ISBN: 978-1-4419-1940-3), through a highly flexible and customisable function which allows the user to define custom variable domains, sampling distributions, updating and smoothing rules, and stopping criteria. Several built-in methods and settings make the package very easy to use for standard optimisation problems.
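As an illustration of the general cross-entropy method (a generic sketch, not this package's API), minimising a function over the reals with a normal sampling distribution and smoothed parameter updates:

ce_minimise <- function(f, mu = 0, sigma = 10, n = 100, elite = 10,
                        alpha = 0.8, iters = 50) {
  for (i in seq_len(iters)) {
    x <- rnorm(n, mu, sigma)                        # sample candidates
    top <- x[order(sapply(x, f))[seq_len(elite)]]   # keep the elite set
    mu <- alpha * mean(top) + (1 - alpha) * mu      # smoothed mean update
    sigma <- alpha * sd(top) + (1 - alpha) * sigma  # smoothed sd update
  }
  mu                                                # estimated minimiser
}

ce_minimise(function(x) (x - 3)^2)  # converges towards 3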
This package implements methods for centrality-related analyses of networks. While the package includes the possibility to build more than 20 indices, its main focus lies on index-free assessment of centrality via partial rankings obtained by neighborhood-inclusion or positional dominance. These partial rankings can be analyzed with different methods, including probabilistic ones such as computing expected node ranks and relative rank probabilities (how likely is it that one node is more central than another?). The methodology is described in depth in the vignettes and in Schoch (2018) <doi:10.1016/j.socnet.2017.12.003>.
This package provides functions for classifying sparseness in 2 x 2 categorical data where one or more cells have zero counts. The classification uses three widely applied summary measures: Risk Difference (RD), Relative Risk (RR), and Odds Ratio (OR). Helps in selecting suitable continuity corrections for zero cells in multi-centre or meta-analysis studies. Also supports sensitivity analysis and can detect phenomena such as Simpson's paradox. The methodology is based on Subbiah and Srinivasan (2008) <doi:10.1016/j.spl.2008.06.023>.
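The three measures follow their standard definitions; a generic sketch (not this package's API) for a 2 x 2 table with rows as groups and columns as event/no event, assuming a 0.5 continuity correction added to every cell when a zero is present:

two_by_two_measures <- function(tab, cc = 0.5) {
  if (any(tab == 0)) tab <- tab + cc         # continuity correction for zero cells
  p1 <- tab[1, 1] / sum(tab[1, ])            # event risk in group 1
  p2 <- tab[2, 1] / sum(tab[2, ])            # event risk in group 2
  c(RD = p1 - p2,                            # Risk Difference
    RR = p1 / p2,                            # Relative Risk
    OR = (p1 / (1 - p1)) / (p2 / (1 - p2)))  # Odds Ratio
}

two_by_two_measures(matrix(c(0, 10, 12, 8), nrow = 2))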
Motivated by changing administrative boundaries over time, the nuts package can convert European regional data with NUTS codes between versions (2006, 2010, 2013, 2016 and 2021) and levels (NUTS 1, NUTS 2 and NUTS 3). The package uses spatial interpolation as in Lam (1983) <doi:10.1559/152304083783914958> based on granular (100m x 100m) area, population and land use data provided by the European Commission's Joint Research Center.
We developed netShiny, a comprehensive tool that helps with the visualization and analysis of networks with the same variables across multiple factor levels. netShiny contains most of the popular network features, such as centrality measures, modularity, and other summary statistics (e.g. the clustering coefficient). It also contains well-known tools for examining the (dis)similarities between two networks, such as pairwise distance measures between networks, set operations on the nodes of the networks, the distribution of edge weights, and a network representing the difference between two correlation matrices. The package also contains tools to perform bootstrapping and to find clusters in networks. See the netShiny manual for more information, documentation, and examples.
This package provides a suite of tools to help enhance the processing efficiency of SQL and R scripts:
- libr_unused() retrieves a vector of package names that are called within an R script but never actually used.
- libr_used() retrieves a vector of package names actively used within an R script; packages loaded with library() but not actually used are not included.
- libr_called() retrieves a vector of all package names called within an R script.
- nolock() appends WITH (nolock) to all tables in SQL queries, which facilitates reading from databases in scenarios where non-blocking reads are preferable, such as in high-transaction environments.
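A minimal usage sketch; the script path argument and its form here are assumptions, not confirmed signatures:

library(nolock)

libr_unused("analysis.R")   # hypothetical script path: packages called but never used
libr_used("analysis.R")     # packages actually used
libr_called("analysis.R")   # all packages called

nolock("SELECT * FROM dbo.sales")  # the query with WITH (nolock) appended to the table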
This package provides functions to fit linear mixed models based on convolutions of the generalized Laplace (GL) distribution. The GL mixed-effects model includes four special cases with normal random effects and normal errors (NN), normal random effects and Laplace errors (NL), Laplace random effects and normal errors (LN), and Laplace random effects and Laplace errors (LL). The methods are described in Geraci and Farcomeni (2020, Statistical Methods in Medical Research) <doi:10.1177/0962280220903763>.
Enables users to retrieve data, meta-data, and codebooks from <https://nettskjema.no/>. The data from the API is richer than from the online data portal. This package is not developed by the University of Oslo IT. Mowinckel (2021) <doi:10.5281/zenodo.4745481>.
The Dirichlet (aka NBD-Dirichlet) model describes the purchase incidence and brand choice of consumer products. We estimate the model and summarize various theoretical quantities of interest to marketing researchers. Also provides functions for making tables that compare observed and theoretical statistics.
An implementation of the nodiv algorithm; see Borregaard, M.K., Rahbek, C., Fjeldsaa, J., Parra, J.L., Whittaker, R.J. & Graham, C.H. (2014) Node-based analysis of species distributions, Methods in Ecology and Evolution 5(11): 1225-1235 <doi:10.1111/2041-210X.12283>. A package for phylogenetic analysis of species distributions. The main function goes through each node in the phylogeny, compares the distributions of the two descendant nodes, and compares the result to a null model. This highlights nodes where major distributional divergence has occurred. The distributional divergence for these nodes is mapped.
This package provides a navigation menu to enable pipe-friendly data processing for hierarchical data structures. By activating the menu items, you can perform operations on each item while maintaining the overall structure in attributes.
Fit multinomial logistic regression with a penalty on the nuclear norm of the estimated regression coefficient matrix, using proximal gradient descent.
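The key step in such a method is the proximal operator of the nuclear norm, which soft-thresholds the singular values of the coefficient matrix; a generic sketch (not this package's API):

# Proximal operator of lambda * (nuclear norm of B).
svt <- function(B, lambda) {
  s <- svd(B)
  d <- pmax(s$d - lambda, 0)                    # soft-threshold singular values
  s$u %*% diag(d, nrow = length(d)) %*% t(s$v)
}

# One proximal gradient iteration would then look like
#   B <- svt(B - step * grad_loss(B), step * lambda)
# where grad_loss() is a placeholder for the gradient of the multinomial
# negative log-likelihood.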
Creation and selection of N-way Partial Least Squares (NPLS) models. Selection of the optimal number of components can be done using ncrossreg(). NPLS was originally described by Rasmus Bro, see <doi:10.1002/%28SICI%291099-128X%28199601%2910%3A1%3C47%3A%3AAID-CEM400%3E3.0.CO%3B2-C>.
Common ecological distributions for nimble models in the form of nimbleFunction objects. Includes Cormack-Jolly-Seber, occupancy, dynamic occupancy, hidden Markov, dynamic hidden Markov, and N-mixture models (Jolly (1965) <doi:10.2307/2333826>, Seber (1965) <doi:10.2307/2333827>, Turek et al. (2016) <doi:10.1007/s10651-016-0353-z>).
Classify occurrence records based on confidence levels of species identification. In addition, it implements tools to filter occurrences inside grid cells and to manually check for possible errors with an interactive Shiny application.
Functions and data sets from the book "Nonlinear Time Series Analysis with R Applications" by B. Guris (2020). The book will be published in Turkish under the title "R Uygulamali Dogrusal Olmayan Zaman Serileri Analizi". The functions in this package make it possible to perform nonlinearity tests, nonlinear unit root tests, and nonlinear cointegration tests, and to estimate nonlinear error correction models. Unit root tests of the Momentum Threshold Autoregressive (MTAR), Smooth Threshold Autoregressive (STAR), and Self Exciting Threshold Autoregressive (SETAR) types can be performed, and cointegration tests based on the MTAR, STAR, and SETAR models can be applied. The Granger causality test based on nonlinear models is also available.
This package provides ensemble capabilities for the predictions of supervised and unsupervised learning models without using training labels. It decides the relative weights of the different models' predictions by using the best model's predictions as the response variable and the rest of the models' predictions as predictors. The user decides which model is best, and is therefore free to ensemble models based on their own design solutions.
R interface for the netstat command line utility used to retrieve and parse commonly used network statistics, including available and in-use transmission control protocol (TCP) ports. Primers offering technical background information on the netstat command line utility are available in the "Linux System Administrator's Manual" by Michael Kerrisk (2014) <https://man7.org/linux/man-pages/man8/netstat.8.html>, and on the Microsoft website (2017) <https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/netstat>.
This package provides a set of techniques that can be used to develop, validate, and implement automated classifiers. A powerful tool for transforming raw data into meaningful information, ncodeR (Shaffer, D. W. (2017) Quantitative Ethnography. ISBN: 0578191687) is designed specifically for working with big data: large document collections, logfiles, and other text data.
Three distinct methods are implemented for evaluating the sums of arbitrary negative binomial distributions. These methods are: Furman's exact probability mass function (Furman (2007) <doi:10.1016/j.spl.2006.06.007>), saddlepoint approximation, and a method of moments approximation. Functions are provided to calculate the density function, the distribution function and the quantile function of the convolutions in question given said evaluation methods. Functions for generating random deviates from negative binomial convolutions and for directly calculating the mean, variance, skewness, and excess kurtosis of said convolutions are also provided.
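As an illustration of the moment-matching idea (a generic sketch, not this package's API), using R's size/prob parameterisation of the negative binomial:

# Match the mean and variance of a sum of independent NB(size_i, prob_i)
# variables to a single negative binomial.
nb_conv_moments <- function(size, prob) {
  mu <- sum(size * (1 - prob) / prob)     # mean of the convolution
  v  <- sum(size * (1 - prob) / prob^2)   # variance of the convolution
  p  <- mu / v                            # matching NB probability
  s  <- mu * p / (1 - p)                  # matching NB size
  list(mean = mu, var = v, size = s, prob = p)
}

nb_conv_moments(size = c(5, 2), prob = c(0.4, 0.7))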
Closed testing has been proven powerful for true discovery guarantees. The computation of closed testing is, however, quite burdensome. A general way to reduce computational complexity is to combine partial closed testings for some prespecified feature sets of interest. Partial closed testings are performed at a Bonferroni-corrected alpha level to guarantee that the lower bounds for the number of true discoveries in the prespecified sets are simultaneously valid. For any post hoc chosen sets of interest, the coherence property is used to obtain the lower bound. In this package, we implement closed testing with globaltest to calculate the lower bound for the number of true discoveries; see Ningning Xu et al. (2021) <arXiv:2001.01541> for a detailed description.