Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
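As a quick illustration, here is a minimal Python sketch of calling this endpoint with the requests library; the host name and the pagination header name below are placeholders for illustration, not the actual values used by this site.

import requests

# Placeholder host; substitute the actual address of this site.
BASE_URL = "https://example.org/api/packages"

resp = requests.get(BASE_URL, params={"search": "hello", "page": 1, "limit": 20})
resp.raise_for_status()

packages = resp.json()
# Pagination details come back in the response headers; the header name below
# is an assumption for illustration.
total_pages = resp.headers.get("X-Total-Pages")
print(f"Got {len(packages)} packages (page 1 of {total_pages}).")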
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Extend Rasch and Item Response Theory (IRT) analyses by providing tools for post-processing the output from five major IRT packages ('eRm', 'psychotools', 'ltm', 'mirt', and 'TAM'). The current version provides the plotPIccc() function, which extracts from the return object of the originating package all information required to draw an extended Person-Item-Map (PIccc), showing any combination of category characteristic curves (CCCs), threshold characteristic curves (TCCs), item characteristic curves (ICCs), category information functions (CIFs), item information functions (IIFs), the test information function (TIF), and the standard error curve (S.E.), for uni- and multidimensional models (as far as supported by each package). It allows for selecting dimensions, items, and categories to plot and offers numerous options to adapt the output. The return object contains all calculated values for further processing.
Somoclu is a massively parallel implementation of self-organizing maps. It exploits multicore CPUs and can be accelerated with CUDA. The topology of the map can be planar or toroid, and the grid of neurons can be rectangular or hexagonal. For details, see Wittek et al. (2017) <doi:10.18637/jss.v078.i09>.
Algorithms for the spatial stratification of landscapes, sampling, and modeling of spatially varying phenomena. These algorithms offer a simple framework for the stratification of geographic space based on raster layers representing landscape factors and/or factor scales. The stratification process follows a hierarchical approach based on first-level units (i.e., classification units) and second-level units (i.e., stratification units). Nonparametric techniques make it possible to measure the correspondence between the geographic space and the landscape configuration represented by the units. These correspondence metrics are useful for defining sampling schemes and for modeling the spatial variability of environmental phenomena. The theoretical background of the algorithms and code examples are presented in Fuentes et al. (2022) <doi:10.32614/RJ-2022-036>.
Downloads and parses SDF (Structure Data Format) and PDB (Protein Data Bank) files for 3D rendering.
We develop the entire solution path for the ROC-SVM presented by Rakotomamonjy. The ROC-SVM solution path algorithm greatly facilitates tuning the regularization parameter lambda in the ROC-SVM by avoiding a grid search, which may be computationally too intensive. For more information on the ROC-SVM, see the report from the ROC Analysis in AI workshop (ROCAI-2004): Hernández-Orallo, José, et al. (2004) <doi:10.1145/1046456.1046489>.
Pattern matching, extraction, replacement, and other string processing operations using Google's RE2 <https://github.com/google/re2> regular-expression engine. Consistent interface (similar to 'stringr'). RE2 uses finite-automata based techniques and offers a fast and safe alternative to backtracking regular-expression engines like those used in 'stringr', 'stringi', and other PCRE implementations.
Random vectors, called rvecs. An rvec holds multiple draws, but tries to behave like a standard R vector, including working well in data frames. Rvecs are useful for analysing output from a simulation or a Bayesian analysis.
This package provides tools for working with rotational data, including simulation from the most commonly used distributions on SO(3), methods for different Bayes, mean, and median type estimators for the central orientation of a sample, confidence/credible regions for the central orientation based on those estimators, and a novel visualization technique for rotation data. Most recently, functions to identify potentially discordant (outlying) values have been added. References: Bingham, Melissa A. and Nordman, Dan J. and Vardeman, Steve B. (2009), Bingham, Melissa A. and Vardeman, Stephen B. and Nordman, Daniel J. (2009), Bingham, Melissa A. and Nordman, Daniel J. and Vardeman, Stephen B. (2010), Leon, C.A. and Masse, J.C. and Rivest, L.P. (2006), Hartley, R. and Aftab, K. and Trumpf, J. (2011), Stanfill, Bryan and Genschel, Ulrike and Hofmann, Heike (2013), Manton, Jonathan (2004), Mardia, K.V. and Jupp, P.E. (2000, ISBN:9780471953333), Rancourt, D. and Rivest, L.P. and Asselin, J. (2000), Chang, Ted and Rivest, Louis-Paul (2001), Fisher, Nicholas I. (1996, ISBN:0521568900).
The goal of ralger is to facilitate web scraping in R.
This package provides efficient functions for detecting multiple change points in multidimensional time series. The models can be piecewise constant or polynomial. Adaptive threshold selection methods are available, see Fan and Wu (2024) <arXiv:2403.00600>.
R interface to CPLEX solvers for linear, quadratic, and (linear and quadratic) mixed integer programs. Support for quadratically constrained programming is available. See the file "INSTALL" for details on how to install the Rcplex package in Linux/Unix-like and Windows systems. Support for sparse matrices is provided by an S3-style class "simple_triplet_matrix" from package slam and by objects from the Matrix package class hierarchy.
This package provides functions to generate censored samples of types I, II, and III from any random sample generator. It also supplies the option to create left- or right-censored samples. In addition, the generation of interval-censored samples is in the testing phase, with two options: fixed-length intervals and random-length intervals.
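As a generic illustration of the idea (not this package's API), the following Python sketch applies type I censoring (a fixed censoring time) and type II censoring (stopping after a fixed number of failures) to a right-censored sample:

import numpy as np

rng = np.random.default_rng(0)
t = rng.exponential(scale=2.0, size=10)   # any random sample generator can be used here

# Type I: censor at a fixed time c.
c = 3.0
obs_type1 = np.minimum(t, c)
status_type1 = (t <= c).astype(int)       # 1 = failure observed, 0 = censored

# Type II: stop observation after the r-th smallest failure.
r = 6
cutoff = np.sort(t)[r - 1]
obs_type2 = np.minimum(t, cutoff)
status_type2 = (t <= cutoff).astype(int)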
Test Statistics for Independence in High-Dimensional Datasets. This package consists of two functions to perform the complete independence test based on the test statistic proposed by Bulut (not yet published) and the one suggested by Najarzadeh (2021) <doi:10.1080/03610926.2019.1702699>. Bulut's statistic is not sensitive to outliers in high-dimensional data, unlike that of Najarzadeh (2021). Therefore, Bulut's statistic can be computed robustly using the RDnp function.
Simulation of phenotype / genotype data under assortative mating. Includes functions for generating Bahadur order-2 multivariate Bernoulli variables with general and diagonal-plus-low-rank correlation structures. Further details are provided in: Border and Malik (2022) <doi:10.1101/2022.10.13.512132>.
This package implements the fast iterative shrinkage-thresholding algorithm (FISTA) to fit a Gamma distribution with an elastic net penalty, as described in Chen, Arakvin and Martin (2018) <doi:10.48550/arXiv.1804.07780>. An implementation for the case of the exponential distribution is also available, with details in Chen and Martin (2018) <doi:10.2139/ssrn.3085672>.
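To illustrate the FISTA idea in isolation (this is not the package's Gamma/elastic-net objective), here is a minimal Python sketch applying FISTA to the familiar lasso problem, minimizing 0.5*||Ax - b||^2 + lam*||x||_1:

import numpy as np

def soft_threshold(z, thresh):
    # Proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    # Step size is the inverse Lipschitz constant of the least-squares gradient.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - step * grad, lam * step)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum (acceleration) step
        x, t = x_new, t_new
    return x

# Small usage example with synthetic data.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
b = A @ np.array([2.0, -1.0] + [0.0] * 8) + rng.normal(scale=0.1, size=50)
print(fista_lasso(A, b, lam=1.0))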
This package implements the t-walk algorithm, a general-purpose, self-adjusting Markov Chain Monte Carlo (MCMC) sampler for continuous distributions as described by Christen & Fox (2010) <doi:10.1214/10-BA603>. The t-walk requires no tuning and is robust for a wide range of target distributions, including high-dimensional and multimodal problems. This implementation includes an option for running multiple chains in parallel to accelerate sampling and facilitate convergence diagnostics.
This package provides a set of tools for working with Romanian personal numeric codes. The core is a validation function which applies several verification criteria to assess the validity of numeric codes. This is accompanied by functionality for extracting the different components of a personal numeric code. A personal numeric code is issued to all Romanian residents either at birth or when they obtain a residence permit.
We provide a toolbox to fit and simulate a univariate or multivariate damped random walk process, also known as an Ornstein-Uhlenbeck process or a continuous-time autoregressive model of the first order, i.e., CAR(1) or CARMA(1, 0). This process is suitable for analyzing univariate or multivariate time series data with irregularly spaced observation times and heteroscedastic measurement errors. In the multivariate case, the number of data points (measurements/observations) available at each observation time does not need to be the same, and the length of each time series can vary. The number of time series data sets that can be modeled simultaneously is limited to ten in this version of the package. We use Kalman filtering to evaluate the resulting likelihood function, which leads to scalable and efficient computation when finding maximum likelihood estimates of the model parameters or drawing their posterior samples. Please pay attention to how the data are loaded if this package is used for astronomical data analyses; see the details in the manual. Also see Hu and Tak (2020) <arXiv:2005.08049>.
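As a stand-alone illustration of the underlying model (not this package's interface), the following Python sketch simulates a univariate damped random walk at irregularly spaced times and adds heteroscedastic measurement noise, using the exact Ornstein-Uhlenbeck transition distribution:

import numpy as np

rng = np.random.default_rng(42)

mu, tau, sigma = 0.0, 5.0, 0.3                   # long-term mean, timescale, short-term variability
times = np.sort(rng.uniform(0, 100, size=60))    # irregularly spaced observation times
obs_sd = rng.uniform(0.05, 0.2, size=60)         # heteroscedastic measurement-error standard deviations

x = np.empty_like(times)
x[0] = rng.normal(mu, sigma * np.sqrt(tau / 2))  # draw from the stationary distribution
for i in range(1, len(times)):
    dt = times[i] - times[i - 1]
    decay = np.exp(-dt / tau)
    # Exact OU transition: the mean reverts toward mu, the variance grows toward stationarity.
    mean = mu + (x[i - 1] - mu) * decay
    var = (sigma ** 2) * (tau / 2) * (1 - decay ** 2)
    x[i] = rng.normal(mean, np.sqrt(var))

y = x + rng.normal(0, obs_sd)                    # observed values with measurement noise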
Inference of relatedness coefficients from a bi-allelic genotype matrix using Maximum Likelihood estimation; see Laporte, F., Charcosset, A. and Mary-Huard, T. (2017) <doi:10.1111/biom.12634>.
Makes it easier to create an R package or research compendium (i.e., a predefined files/folders structure) so that users can focus on the code/analysis instead of wasting time organizing files. A full, ready-to-work structure is set up with some additional features: version control, remote repository creation, and CI/CD configuration (checking package integrity under several operating systems, testing code with 'testthat', and building and deploying a website using 'pkgdown'). This package heavily relies on the R packages 'devtools' and 'usethis' and follows recommendations made by Wickham H. (2015) <ISBN:9781491910597> and Marwick B. et al. (2018) <doi:10.7287/peerj.preprints.3192v2>.
An R interface to the SYMPHONY solver for mixed-integer linear programs.
The Diceware method can be used to generate strong passphrases. In short, you roll a six-sided die five times in a row, and the resulting number is matched against a dictionary of easily remembered words. By combining seven words generated this way, you obtain a passphrase that is relatively easy to remember but would take several million years (on average) for a powerful computer to guess.
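A toy Python sketch of the method (not this package's API; the three-entry wordlist below stands in for a real Diceware dictionary of 7776 entries):

import secrets

# Stand-in wordlist: a real Diceware dictionary maps every 5-digit
# combination of rolls (11111..66666) to a word.
WORDLIST = {"11111": "abacus", "11112": "abdomen", "11113": "abide"}

def roll_word(wordlist):
    key = "".join(str(secrets.randbelow(6) + 1) for _ in range(5))  # five dice rolls
    return wordlist.get(key, key)  # fall back to the key itself in this toy wordlist

passphrase = " ".join(roll_word(WORDLIST) for _ in range(7))  # seven words
print(passphrase)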
Datasets with energy consumption data of different data measurement frequencies. The data stems from several publicly funded research projects of the Chair of Information Systems and Energy Efficient Systems at the University of Bamberg.
Compiling regression results into a publishable format, conducting post-hoc hypothesis testing, and plotting moderating effects (the effect of X on Y becomes stronger/weaker as Z increases).