Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
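For example, a minimal sketch of calling this endpoint from Python (only the endpoint and query parameters above come from this documentation; the base URL is a placeholder and the JSON response body is an assumption):

import requests

# Placeholder host; substitute the actual server that exposes /api/packages.
BASE_URL = "https://example.org"

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

# Pagination information (e.g. the number of pages) is returned in the response headers.
for name, value in resp.headers.items():
    print(f"{name}: {value}")

# Print the matching packages; the body is assumed to be JSON here.
print(resp.json())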
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools for analyzing spatial data, especially non-Gaussian areal data. The current version supports the sparse restricted spatial regression model of Hughes and Haran (2013) <DOI:10.1111/j.1467-9868.2012.01041.x>, the centered autologistic model of Caragea and Kaiser (2009) <DOI:10.1198/jabes.2009.07032>, and the Bayesian spatial filtering model of Hughes (2017) <arXiv:1706.04651>.
In the working paper titled "Why You Should Never Use the Hodrick-Prescott Filter", James D. Hamilton proposes a new alternative to economic time series filtering. The neverhpfilter package provides functions and data for reproducing his work. Hamilton (2017) <doi:10.3386/w23429>.
An implementation of the National Information Platforms for Nutrition (NiPN) analytic methods for assessing the quality of anthropometric datasets that include measurements of weight, height or length, middle upper arm circumference, sex, and age. The focus is on anthropometric status, but many of the presented methods could be applied to other variables.
To study network evolution models and different blockmodeling approaches. Various functions enable generating (temporal) networks with a selected blockmodel type, taking into account selected local network mechanisms. The development of this package is financially supported by the Slovenian Research Agency (www.arrs.gov.si) within the research program P5-0168 and the research project J5-2557 (Comparison and evaluation of different approaches to blockmodeling dynamic networks by simulations with application to Slovenian co-authorship networks).
Network pre-processing and normalization. Methods for normalizing graphs, including Chua normalization, Laplacian normalization, binary magnification, min-max normalization, and others. Methods to sparsify adjacency matrices. Methods for graph pre-processing and for filtering edges of the graph.
Interface to NatureServe (<https://www.natureserve.org/>). Includes methods to get data, image metadata, search taxonomic names, and make maps.
Nonparametric tests for main effects, simple effects, and the interaction effect with censored data and two factorial influencing variables.
Fit and compare nonlinear mixed-effects models in differential equations with flexible dosing information commonly seen in pharmacokinetics and pharmacodynamics (Almquist, Leander, and Jirstrand 2015 <doi:10.1007/s10928-015-9409-1>). Differential equations are solved by compiled C code provided in the rxode2 package (Wang, Hallow, and James 2015 <doi:10.1002/psp4.12052>).
Regress network responses (both directed and undirected) onto covariates of interest that may be actor-, relation-, or network-valued. In addition, compute principled variance estimates of the coefficients assuming that the errors are jointly exchangeable. Missing data is accommodated. Additionally implements building and inversion of covariance matrices under joint exchangeability, and generates random covariance matrices from this class. For more detail on methods, see Marrs, Fosdick, and McCormick (2017) <arXiv:1701.05530>.
Collapse, partition, combine, fill gaps in and expand date/time ranges.
Simulation and estimation for Neyman-Scott spatial cluster point process models and their extensions, based on the methodology in Tanaka, Ogata, and Stoyan (2008) <doi:10.1002/bimj.200610339>. To estimate parameters by the simplex method, parallel computation using the OpenMP application programming interface is available. For more details, see Tanaka, Saga, and Nakano <doi:10.18637/jss.v098.i06>.
This package provides gradient-based MCMC sampling algorithms for use with the MCMC engine provided by the nimble package. This includes two versions of Hamiltonian Monte Carlo (HMC) No-U-Turn (NUTS) sampling, and (under development) Langevin samplers. The `NUTS_classic` sampler implements the original HMC-NUTS algorithm as described in Hoffman and Gelman (2014) <doi:10.48550/arXiv.1111.4246>. The `NUTS` sampler is a modern version of HMC-NUTS sampling matching the HMC sampler available in version 2.32.2 of Stan (Stan Development Team, 2023). In addition, convenience functions are provided for generating and modifying MCMC configuration objects which employ HMC sampling.
A nomogram, which can be produced with the rms package, provides a graphical explanation of a prediction process. However, it is not easy to draw straight lines or to read points and probabilities accurately, and it is hard for users to calculate total points and probabilities for all subjects. This package provides the formula_rd() and formula_lp() functions to fit the formula of total points with raw data and linear predictors, respectively, by polynomial regression. The points_cal() function helps calculate the total points, and prob_cal() can be used to calculate the probabilities after lrm(), cph(), or psm() regression. For more complex conditions, such as interactions or restricted cubic splines, TotalPoints.rms() can be used.
This package provides a collection of data structures that represent volumetric brain imaging data. The focus is on basic data handling for 3D and 4D neuroimaging data. In addition, there are functions to read and write NIFTI files and limited support for reading AFNI files.
This package provides a suite of tools that can assist in enhancing the processing efficiency of SQL and R scripts. libr_unused() retrieves a vector of package names that are called within an R script but never actually used in the script. libr_used() retrieves a vector of package names actively utilized within an R script; packages loaded using library() but not actually used in the script are not included. libr_called() retrieves a vector of all package names called within an R script. nolock() appends WITH (nolock) to all tables in SQL queries, which facilitates reading from databases in scenarios where non-blocking reads are preferable, such as in high-transaction environments.
Implementations of various control theory methods for use in brain and psychological networks. Contains controllability statistics from Pasqualetti, Zampieri & Bullo (2014) <doi:10.1109/TCNS.2014.2310254>, optimal control algorithms from Lewis, Vrabie & Syrmos (2012, ISBN:978-0-470-63349-6), and various utilities.
An interactive presentation on the topic of the normal distribution using the rmarkdown and shiny packages. It is helpful to those who want to learn the normal distribution quickly and get hands-on experience. The presentation has a template for solving problems on the normal distribution. Runtime examples are provided in the package functions as well as at <https://kartikeyastat.shinyapps.io/NormalDistribution/>.
Implements a modified lifting algorithm on which the nondecimated lifting transform is built. It has applications in wavelet shrinkage.
Models for non-linear time series analysis and causality detection. The main functionalities of this package consist of an implementation of the classical Granger causality test (C. W. J. Granger, 1980) <doi:10.1016/0165-1889(80)90069-X> and a non-linear version of it based on feed-forward neural networks. The package also contains an implementation of the Transfer Entropy <doi:10.1103/PhysRevLett.85.461> and of the continuous Transfer Entropy using an approximation based on the k-nearest neighbors <doi:10.1103/PhysRevE.69.066138>. There are also some other useful tools, such as the VARNN (Vector Auto-Regressive Neural Network) prediction model, the Augmented test of stationarity, and the discrete and continuous entropy and mutual information.
Computes the pdf, cdf, and quantile function, and generates random numbers, for neutrosophic distributions. This family has been developed by different authors in recent years. See Patro and Smarandache (2016) <doi:10.5281/zenodo.571153> and Rao et al. (2023) <doi:10.5281/zenodo.7832786>.
Catalogue of NBER working papers published between June 1973 and December 2021.
Counts syllables in character vectors for English words. Imputes syllables as the number of vowel sequences for words not found.
This package provides a variety of functions for the best known and most innovative approaches to nonparametric boundary estimation. The selected methods are concerned with empirical, smoothed, unrestricted as well as constrained fits under both separate and multiple shape constraints. They cover robust approaches to outliers as well as data envelopment techniques based on piecewise polynomials, splines, local linear fitting, extreme values and kernel smoothing. The package also seamlessly allows for Monte Carlo comparisons among these different estimation methods. Its use is illustrated via a number of empirical applications and simulated examples.
An implementation of the Nonparametric Predictive Inference approach in R. It provides tools for quantifying uncertainty via lower and upper probabilities. It includes useful functions for pairwise and multiple comparisons: comparing two groups with and without terminated tails, selecting the best group, selecting the subset of best groups, and selecting the subset including the best group.