Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
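As a minimal sketch, the same query could be issued from R with the httr package; the base URL below is a placeholder, since only the /api/packages path is documented here.

    # Hypothetical example of querying the search endpoint from R.
    # "https://example.org" stands in for the host actually serving this site.
    library(httr)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp, as = "parsed")   # the matching packages
    headers(resp)                  # pagination details are returned in the headers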
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools developed to facilitate the establishment of rank and social hierarchy in gregarious animals using the Si method developed by Kondo & Hurnik (1990) <doi:10.1016/0168-1591(90)90125-W>. It is also possible to determine the number of agonistic interactions between two individuals and to build sociometric and dyadic matrices from datasets obtained through electronic bins. In addition, the results can be plotted as a bar plot, box plot, or sociogram.
Includes built-in methods for generating long SQL CASE statements, and other SQL statements that may otherwise be arduous to construct by hand. The generated statement can easily be concatenated to string literals to form queries to SQL-like databases, such as when using the RODBC package. The current methods include casewhen() for building CASE statements, inlist() for building IN statements, and updatetable() for building UPDATE statements.
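As a plain base-R illustration (not this package's API), this is roughly the kind of CASE statement such helpers generate and how the result concatenates into a larger query; the column and table names are made up for illustration.

    # Hand-rolled CASE statement of the sort casewhen() is described as generating.
    breaks <- c(90, 75)
    grades <- c("A", "B")
    case_sql <- paste0("CASE ",
                       paste0("WHEN score > ", breaks, " THEN '", grades, "'",
                              collapse = " "),
                       " ELSE 'C' END")
    query <- paste0("SELECT id, ", case_sql, " AS grade FROM results")
    cat(query)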
This package provides a simple to use summary function that can be used with pipes and displays nicely in the console. The default summary statistics may be modified by the user as can the default formatting. Support for data frames and vectors is included, and users can implement their own skim methods for specific object types as described in a vignette. Default summaries include support for inline spark graphs. Instructions for managing these on specific operating systems are given in the "Using skimr" vignette and the README.
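A minimal usage sketch, assuming this describes the skimr package and its skim() function:

    # skim() prints compact per-column summaries and works at the end of a pipe.
    library(skimr)
    skim(iris)        # summary of every column, grouped by type
    iris |> skim()    # the same call, piped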
Get sun position, sunlight phases (times for sunrise, sunset, dusk, etc.), moon position and lunar phase for a given location and time. Most calculations are based on the formulas given in the Astronomy Answers articles about the position of the Sun and the planets: <https://www.aa.quae.nl/en/reken/zonpositie.html>.
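A minimal sketch, assuming this describes the suncalc package; the coordinates below are arbitrary (roughly Paris).

    # Sunlight phases and lunar phase for one location and date.
    library(suncalc)
    getSunlightTimes(date = Sys.Date(), lat = 48.85, lon = 2.35, tz = "UTC")
    getMoonIllumination(date = Sys.Date())   # fraction, phase and angle of the moon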
Metapackage for implementing a variety of event-based models, with a focus on spatially explicit models. These include raster-based, event-based, and agent-based models. The core simulation components (provided by SpaDES.core) are built upon a discrete event simulation (DES; see Matloff (2011) ch. 7.8.3 <https://nostarch.com/artofr.htm>) framework that facilitates modularity, and easily enables the user to include additional functionality by running user-built simulation modules (see also SpaDES.tools). Included are numerous tools to visualize rasters and other maps (via quickPlot), and caching methods for reproducible simulations (via reproducible). Tools for running simulation experiments are provided by SpaDES.experiment. Additional functionality is provided by the SpaDES.addins and SpaDES.shiny packages.
Fast, lightweight toolkit for data splitting. Data sets can be partitioned into disjoint groups (e.g. into training, validation, and test) or into (repeated) k-folds for subsequent cross-validation. Besides basic splits, the package supports stratified, grouped, and blocked splitting. Furthermore, cross-validation folds for time series data can be created. See e.g. Hastie et al. (2001) <doi:10.1007/978-0-387-84858-7> for the basic background on data partitioning and cross-validation.
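As a generic base-R sketch of the partitioning idea (not this package's API), rows can be assigned to disjoint groups like so:

    # Randomly assign the 150 rows of iris to train / validation / test groups.
    set.seed(1)
    n <- nrow(iris)
    grp <- sample(c("train", "valid", "test"), n, replace = TRUE,
                  prob = c(0.7, 0.15, 0.15))
    parts <- split(seq_len(n), grp)
    sapply(parts, length)   # roughly a 70/15/15 split of the row indices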
This package implements the Self-Similarity Test for Normality (SSTN), a new statistical test designed to assess whether a given sample originates from a normal distribution. The procedure is based on iteratively estimating the characteristic function of the sum of standardized i.i.d. random variables and comparing it to the characteristic function of the standard normal distribution. A Monte Carlo procedure is used to determine the empirical distribution of the test statistic under the null hypothesis. Details of the methodology are described in Anarat and Schwender (2025), "A normality test based on self-similarity" (Submitted).
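For intuition only (this is not the package's procedure, just the comparison it builds on): the empirical characteristic function of standardized sums of an i.i.d. sample can be set against the standard normal characteristic function exp(-t^2/2).

    # Empirical characteristic function of standardized pairwise sums of a
    # (deliberately non-normal) sample versus exp(-t^2 / 2).
    set.seed(1)
    z <- as.numeric(scale(rexp(400)))                        # standardized sample
    s <- (z[seq(1, 399, 2)] + z[seq(2, 400, 2)]) / sqrt(2)   # standardized sums of pairs
    t <- seq(-3, 3, length.out = 61)
    ecf <- sapply(t, function(u) mean(cos(u * s)))           # real part of the empirical CF
    max(abs(ecf - exp(-t^2 / 2)))                            # discrepancy from normality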
Data used in Taback, N. (2022). Design and Analysis of Experiments and Observational Studies using R. Chapman & Hall/CRC.
This package implements the Scout method for regression, described in "Covariance-regularized regression and classification for high-dimensional problems", by Witten and Tibshirani (2008), Journal of the Royal Statistical Society, Series B 71(3): 615-636.
Extracts and summarizes metadata from data frames, including variable names, labels, types, and missing values. Computes compact descriptive statistics, frequency tables, and cross-tabulations to assist with efficient data exploration. Includes an interactive and exportable codebook generator for documenting variable metadata. Facilitates the identification of missing data patterns and structural issues in datasets. Designed to streamline initial data management and exploratory analysis workflows within R.
Pull data from the STAT Search Analytics API <https://help.getstat.com/knowledgebase/api-services/>. It was developed by the Search Discovery team to help analyze keyword ranking data.
This package provides a set of functions to build a scoring model from beginning to end, leading the user through an efficient and organized development process and significantly reducing the time spent on data exploration, variable selection, feature engineering, binning, and model selection, among other recurrent tasks. The package also incorporates monotonic and customized binning, scaling capabilities that transform logistic coefficients into points for better business understanding, and functions that calculate and visualize classic performance metrics of a classification model.
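As a generic sketch of the usual points-scaling arithmetic (not this package's functions): a score is a linear transform of the model's log-odds, fixed by a target score, target odds, and points-to-double-the-odds (PDO); the numbers below are illustrative only.

    # Conventional scorecard scaling: Score = Offset + Factor * log(odds).
    pdo <- 20; target_score <- 600; target_odds <- 50
    scale_factor <- pdo / log(2)
    offset <- target_score - scale_factor * log(target_odds)
    log_odds <- 2.3                       # e.g. the linear predictor of a logistic model
    offset + scale_factor * log_odds      # the corresponding score in points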
This package provides functions to calculate some point estimators and estimate their variance under unequal probability sampling without replacement. Single and two-stage sampling designs are considered. Some approximations for the second-order (joint) inclusion probabilities are available (sample and population based). A variety of jackknife variance estimators are implemented. Almost every function is written in compiled C code for speed, and the functions incorporate performance improvements for large datasets.
Web application using shiny for the SSD (Species Sensitivity Distribution) module of the MOSAIC (MOdeling and StAtistical tools for ecotoxICology) platform. It estimates the Hazardous Concentration for x% of the species (HCx) from toxicity values that can be censored and provides various plotting options for a better understanding of the results. See our companion paper Kon Kam King et al. (2014) <doi:10.48550/arXiv.1311.5772>.
This package implements a custom matrix input field.
This package provides functions for computing test subscores using different methods in both classical test theory (CTT) and item response theory (IRT). This package enables three types of subscoring methods within the framework of CTT and IRT, including (1) Wainer's augmentation method (Wainer et al., 2001) <doi:10.4324/9781410604729>, (2) Haberman's subscoring methods (Haberman, 2008) <doi:10.3102/1076998607302636>, and (3) Yen's objective performance index (OPI; Yen, 1987) <https://www.ets.org/research/policy_research_reports/publications/paper/1987/hrap>. It also includes functions to compute Proportional Reduction of Mean Squared Errors (PRMSEs) in Haberman's methods, which are used to examine whether test subscores are of added value. In addition, the package includes a function to assess the local independence assumption of IRT with Yen's Q3 statistic (Yen, 1984 <doi:10.1177/014662168400800201>; Yen, 1993 <doi:10.1111/j.1745-3984.1993.tb00423.x>).
In a scatterplot where the response variable is Gaussian, Poisson or binomial, we consider the case in which the mean function is smooth with a change-point, which is a mode, an inflection point or a jump point. The main routine estimates the mean curve and the change-point as well using shape-restricted B-splines. An optional subroutine delivering a bootstrap confidence interval for the change-point is incorporated in the main routine.
Settings and functions to extend the knitr Stata engine.
Simple utilities to design and generate density functions on bounded regions in space and space-time, and simulate independent, identically distributed data therefrom. See Davies & Lawson (2019) <doi:10.1080/00949655.2019.1575066> for example.
This package implements the Shimazaki-Shinomoto method for optimizing the bin width of a histogram. This method minimizes the mean integrated squared error (MISE) and features a C++ backend for high performance and shift-averaging to remove edge-position bias. It is ideally suited for time-dependent rate estimation and for identifying intrinsic data structures. Supports both 1D and 2D data distributions. For more details see Shimazaki and Shinomoto (2007), "A Method for Selecting the Bin Size of a Time Histogram" <doi:10.1162/neco.2007.19.6.1503>.
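For reference, the cited rule picks the bin width Delta minimizing C(Delta) = (2 * mean - variance) / Delta^2 computed over the histogram counts; a minimal base-R sketch of that rule (without the package's shift-averaging or C++ backend) might look like this:

    # Shimazaki-Shinomoto cost over a grid of candidate bin counts.
    ss_binwidth <- function(x, n_bins = 2:100) {
      rng <- range(x)
      cost <- sapply(n_bins, function(n) {
        delta  <- diff(rng) / n
        counts <- hist(x, breaks = seq(rng[1], rng[2], length.out = n + 1),
                       plot = FALSE)$counts
        k_bar <- mean(counts)
        v     <- mean((counts - k_bar)^2)   # biased variance of the counts
        (2 * k_bar - v) / delta^2           # the cost to minimize
      })
      diff(rng) / n_bins[which.min(cost)]   # bin width with minimal cost
    }
    set.seed(1)
    ss_binwidth(rnorm(500))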
Improves the interpretation of the Standardized Precipitation Index under changing climate conditions. The package uses the nonstationary approach proposed in Blain et al. (2022) <doi:10.1002/joc.7550> to detect trends in rainfall quantities and to quantify the effect of such trends on the probability of a drought event occurring.
This package provides a collection of self-labeled techniques for semi-supervised classification, in which both labeled and unlabeled data are used to train a classifier. This learning paradigm has obtained promising results, specifically in the presence of a reduced set of labeled examples. The implemented techniques enlarge the original labeled set using the most confident predictions to classify unlabeled data, and they can be applied to classification problems in several domains by specifying a supervised base classifier. At low ratios of labeled data, this approach can be shown to perform better than classical supervised classifiers.
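A generic sketch of the self-training loop described above (not this package's API): a base classifier is fit on the labeled rows, and its most confident predictions on unlabeled rows are promoted to labels before refitting. The column names and confidence thresholds are illustrative.

    # Self-training with a logistic base classifier on two iris species.
    set.seed(1)
    d <- droplevels(iris[iris$Species != "setosa", ])
    lab <- sample(nrow(d), 20)                      # small labeled set
    unl <- setdiff(seq_len(nrow(d)), lab)
    for (i in 1:5) {
      fit <- glm(Species ~ Petal.Length + Petal.Width,
                 data = d[lab, ], family = binomial)
      p <- predict(fit, d[unl, ], type = "response")
      conf_idx <- unl[p < 0.05 | p > 0.95]          # most confident predictions
      if (length(conf_idx) == 0) break
      d$Species[conf_idx] <- levels(d$Species)[1 + (p[match(conf_idx, unl)] > 0.95)]
      lab <- c(lab, conf_idx)                       # enlarge the labeled set
      unl <- setdiff(unl, conf_idx)
    }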
R language bindings for SolveBio's API. SolveBio is a biomedical knowledge hub that enables life science organizations to collect and harmonize the complex, disparate "multi-omic" data essential for today's R&D and BI needs.
These are tools that allow users to do time series diagnostics, primarily tests of unit root, by way of simulation. While there is nothing necessarily wrong with the received wisdom of critical values generated decades ago, simulation provides its own perks. Not only is simulation broadly informative as to what these various test statistics do and what their plausible values are, it also provides more flexibility for assessing unit roots under different thresholds or different hypothesized distributions.
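As a generic base-R illustration of the simulation idea (not this package's functions), the Dickey-Fuller t-statistic can be simulated under a pure random-walk null to obtain critical values directly:

    # Simulate the Dickey-Fuller t-statistic (regression with intercept) under the null.
    set.seed(42)
    df_stat <- function(n) {
      y    <- cumsum(rnorm(n))                 # random walk under the null
      dy   <- diff(y)
      ylag <- y[-n]
      fit  <- lm(dy ~ ylag)                    # regress Delta y_t on y_{t-1}
      summary(fit)$coefficients["ylag", "t value"]
    }
    sims <- replicate(2000, df_stat(200))
    quantile(sims, c(0.01, 0.05, 0.10))        # simulated critical values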