Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
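For example, a minimal Python sketch of calling this endpoint with the requests library (the base URL below is a placeholder for this site's host):

import requests

BASE_URL = "https://example.org"  # placeholder; substitute this site's host

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()
print(resp.headers)  # pagination information arrives in the response headers
print(resp.json())   # the matching packages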
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Implementation of analytical models for estimating streamflow depletion due to groundwater pumping, and other related tools. Functions are broadly split into two groups: (1) analytical streamflow depletion models, which estimate streamflow depletion for a single stream reach resulting from groundwater pumping; and (2) depletion apportionment equations, which distribute estimated streamflow depletion among multiple stream reaches within a stream network. See Zipper et al. (2018) <doi:10.1029/2018WR022707> for more information on depletion apportionment equations and Zipper et al. (2019) <doi:10.1029/2018WR024403> for more information on analytical depletion functions, which combine analytical models and depletion apportionment equations.
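As a point of reference, here is a minimal Python sketch of the Glover and Balmer (1954) solution, one of the classic analytical streamflow depletion models of the kind this package implements (parameter names follow common hydrogeology usage, not the package's API):

from scipy.special import erfc

def glover_depletion_fraction(t, d, S, Tr):
    """Fraction of the pumping rate captured from the stream at time t.
    t: time since pumping began [days]; d: well-to-stream distance [m];
    S: aquifer storativity [-]; Tr: aquifer transmissivity [m^2/day]."""
    return erfc((d**2 * S / (4.0 * Tr * t)) ** 0.5)

# Example: depletion fraction after 100 days of pumping.
print(glover_depletion_fraction(t=100.0, d=150.0, S=0.1, Tr=100.0))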
Extracts and summarizes metadata from data frames, including variable names, labels, types, and missing values. Computes compact descriptive statistics, frequency tables, and cross-tabulations to assist with efficient data exploration. Includes an interactive and exportable codebook generator for documenting variable metadata. Facilitates the identification of missing data patterns and structural issues in datasets. Designed to streamline initial data management and exploratory analysis workflows within R.
Provides functions for simulation and inference for stochastic differential equations (SDEs). It accompanies the book "Simulation and Inference for Stochastic Differential Equations: With R Examples" (Iacus, 2008, Springer; ISBN: 978-0-387-75838-1).
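A minimal Python sketch of the Euler-Maruyama scheme, the basic simulation method covered in that book (illustrative only; the package itself is an R library):

import numpy as np

def euler_maruyama(mu, sigma, x0, T=1.0, n=1000, rng=None):
    """Simulate dX_t = mu(X_t) dt + sigma(X_t) dW_t on [0, T]."""
    rng = rng or np.random.default_rng()
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[i + 1] = x[i] + mu(x[i]) * dt + sigma(x[i]) * dw
    return x

# Ornstein-Uhlenbeck example: dX = -theta X dt + s dW
path = euler_maruyama(mu=lambda x: -0.5 * x, sigma=lambda x: 0.2, x0=1.0)
print(path[-1])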
Implements the Brunton et al. (2016; PNAS <doi:10.1073/pnas.1517384113>) sparse identification of nonlinear dynamics (SINDy) algorithm for finding ordinary differential equations for a measured system from raw data. The package includes a set of additional tools for working with raw data, with an emphasis on cognitive science applications (Dale and Bhat, 2018 <doi:10.1016/j.cogsys.2018.06.020>). See <https://github.com/racdale/sindyr> for examples and updates.
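A minimal Python sketch of the core SINDy step, sequentially thresholded least squares over a library of candidate terms (illustrative; the names and toy library here are my own, not the package's interface):

import numpy as np

def stls(theta, dxdt, threshold=0.1, n_iter=10):
    """theta: (m, p) library matrix; dxdt: (m,) derivative samples."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0            # prune small coefficients
        big = ~small
        if big.any():              # refit on the surviving terms
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy example: recover dx/dt = -2x from noisy samples, library [1, x, x^2].
x = np.linspace(-1, 1, 200)
dxdt = -2 * x + 0.01 * np.random.default_rng(0).normal(size=x.size)
theta = np.column_stack([np.ones_like(x), x, x**2])
print(stls(theta, dxdt))  # coefficient on x should be close to -2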
Data related to the Salem Witch Trials: datasets and tutorials documenting the witch accusations and trials centered around Salem, Massachusetts, in 1692. Originally assembled by Richard B. Latner of Tulane University for his website <https://www2.tulane.edu/~salem/index.html>. The data sets include information on 152 accused witches, members of the Salem Village Committee, signatories of petitions related to the events, and tax data for Salem Village.
Generate simulated datasets from an initial underlying distribution and apply transformations to obtain realistic data. Implements the NORTA (Normal-to-anything) approach from Cario and Nelson (1997) and other data generating mechanisms. Simple network visualization tools are provided to facilitate communicating the simulation setup.
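A minimal Python sketch of the NORTA idea (illustrative; note that in full NORTA the latent normal correlation must be adjusted to achieve a target output correlation, a step skipped here):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])  # correlation of the latent normals
z = rng.multivariate_normal(mean=[0, 0], cov=corr, size=1000)
u = stats.norm.cdf(z)  # uniform marginals, dependence preserved

# Apply target marginal distributions via inverse CDFs.
x1 = stats.expon.ppf(u[:, 0], scale=2.0)  # exponential margin
x2 = stats.gamma.ppf(u[:, 1], a=3.0)      # gamma margin
print(np.corrcoef(x1, x2)[0, 1])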
Several functions and S3 methods to construct a super learner in the presence of censored times-to-event and to evaluate its prognostic capacities.
This package provides tools to simulate realistic raw case data for an epidemic in the form of line lists and contacts using a branching process. Simulated outbreaks are parameterised with epidemiological parameters and can have age-structured populations, age-stratified hospitalisation and death risks, and time-varying case fatality risk.
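A minimal Python sketch of the underlying branching process, with Poisson offspring and an illustrative reproduction number (this is not the package's interface):

import numpy as np

def simulate_outbreak(r0=1.5, max_cases=500, rng=None):
    """Total cases in a branching process with Poisson(r0) offspring."""
    rng = rng or np.random.default_rng()
    total, current = 1, 1  # one index case
    while current > 0 and total < max_cases:
        current = int(rng.poisson(r0, size=current).sum())
        total += current
    return total

print(simulate_outbreak(rng=np.random.default_rng(1)))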
Provides regularized principal component analysis incorporating smoothness, sparseness, and orthogonality of eigenfunctions by using the alternating direction method of multipliers algorithm (Wang and Huang, 2017, <DOI:10.1080/10618600.2016.1157483>). The method can be applied to either regularly or irregularly spaced data, including 1D, 2D, and 3D.
Simulate survival times from standard parametric survival distributions (exponential, Weibull, Gompertz), 2-component mixture distributions, or a user-defined hazard, log hazard, cumulative hazard, or log cumulative hazard function. Baseline covariates can be included under a proportional hazards assumption. Time-dependent effects (i.e. non-proportional hazards) can be included by interacting covariates with linear time or a user-defined function of time. Clustered event times are also accommodated. The 2-component mixture distributions allow for a variety of flexible baseline hazard functions reflecting those seen in practice. If a user-defined hazard or log hazard function is supplied, the resulting cumulative hazard function need not have a closed-form solution. For details see the supporting paper <doi:10.18637/jss.v097.i03>. Note that this package is modelled on the survsim package available in the Stata software (see Crowther and Lambert (2012) <https://www.stata-journal.com/sjpdf.html?articlenum=st0275> or Crowther and Lambert (2013) <doi:10.1002/sim.5823>).
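For intuition, a minimal Python sketch of the standard inverse-transform approach for Weibull survival times under proportional hazards (cf. Bender et al. 2005); parameter names are illustrative, not this package's arguments:

import numpy as np

def sim_weibull_ph(x, beta, lam=0.1, gamma=1.5, rng=None):
    """x: (n, p) covariates; beta: (p,) log hazard ratios.
    Baseline hazard h0(t) = lam * gamma * t**(gamma - 1)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=x.shape[0])
    return (-np.log(u) / (lam * np.exp(x @ beta))) ** (1.0 / gamma)

x = np.random.default_rng(0).normal(size=(5, 1))
print(sim_weibull_ph(x, beta=np.array([0.7])))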
This package provides a simple, light, and robust interface between R and the Scryfall card data API <https://scryfall.com/docs/api>.
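A minimal Python sketch of hitting the Scryfall API directly, using the documented /cards/named endpoint with a fuzzy name (the package wraps calls like this from R):

import requests

resp = requests.get(
    "https://api.scryfall.com/cards/named",
    params={"fuzzy": "black lotus"},
)
resp.raise_for_status()
card = resp.json()
print(card["name"], card.get("mana_cost"))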
Analysis and plotting tools for snow profile data produced from manual snowpack observations and physical snowpack models. The functions in this package support snowpack and avalanche research by reading various formats of data (including CAAML, SMET, generic CSV, and outputs from the snow cover model SNOWPACK), manipulating the data, and producing graphics such as stratigraphy and time series profiles. Package developed by the Simon Fraser University Avalanche Research Program <http://www.avalancheresearch.ca>. Graphics apply visualization concepts from Horton, Nowak, and Haegeli (2020, <doi:10.5194/nhess-20-1557-2020>).
Fast, lightweight toolkit for data splitting. Data sets can be partitioned into disjoint groups (e.g. into training, validation, and test) or into (repeated) k-folds for subsequent cross-validation. Besides basic splits, the package supports stratified, grouped as well as blocked splitting. Furthermore, cross-validation folds for time series data can be created. See e.g. Hastie et al. (2001) <doi:10.1007/978-0-387-84858-7> for the basic background on data partitioning and cross-validation.
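A minimal Python sketch of the basic k-fold idea (illustrative only; the package's own functions cover the stratified, grouped, and blocked variants):

import numpy as np

def kfold_indices(n, k=5, rng=None):
    """Split indices 0..n-1 into k disjoint folds after shuffling."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(n)
    return np.array_split(idx, k)  # list of k test-index arrays

folds = kfold_indices(20, k=5, rng=np.random.default_rng(0))
for i, test in enumerate(folds):
    train = np.setdiff1d(np.arange(20), test)
    print(f"fold {i}: {len(train)} train / {len(test)} test")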
This package provides a methodology to analyze how species occurrences change over time, particularly in relation to spatial and thermal factors. It facilitates the development of explanatory hypotheses about the impact of environmental shifts on species by analyzing historical presence data that includes temporal and geographic information. Approach described in Lobo et al., 2023 <doi:10.1002/ece3.10674>.
Sonification (or audification) is the process of representing data by sounds in the audible range. This package provides the R function sonify(), which transforms univariate data, sampled at regular or irregular intervals, into a continuous sound with time-varying frequency. The ups and downs in frequency represent the ups and downs in the data. sonify() provides a substitute for R's plot function to simplify data analysis for the visually impaired.
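A minimal Python sketch of the same idea: interpolate the data onto an audio timeline, map values to a frequency range, and synthesize a tone whose pitch follows the data (the frequency range and durations are arbitrary choices, not the package's defaults):

import numpy as np
import wave

data = np.sin(np.linspace(0, 4 * np.pi, 200))  # example series
sr, dur = 44100, 3.0                            # sample rate [Hz], seconds
t = np.linspace(0, dur, int(sr * dur))
# Interpolate the data onto the audio timeline, then map to 220-880 Hz.
y = np.interp(t, np.linspace(0, dur, data.size), data)
span = np.ptp(y)
freq = 220 + (y - y.min()) / (span if span else 1.0) * 660
phase = 2 * np.pi * np.cumsum(freq) / sr        # integrate frequency
audio = (0.5 * np.sin(phase) * 32767).astype(np.int16)

with wave.open("sonified.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(sr)
    f.writeframes(audio.tobytes())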
Uses simulation to create prediction intervals for post-policy outcomes in interrupted time series (ITS) designs, following Miratrix (2020) <arXiv:2002.05746>. This package provides methods for fitting ITS models with lagged outcomes and variables to account for temporal dependencies. It then conducts inference via simulation, simulating a set of plausible counterfactual post-policy series to compare to the observed post-policy series. The package also provides methods to visualize such data and to incorporate seasonality models, smoothing, and aggregation/summarization. This work was partially funded by Arnold Ventures in collaboration with MDRC.
Implementation of the family of generalised age-period-cohort stochastic mortality models. This family of models encompasses many models proposed in the actuarial and demographic literature including the Lee-Carter (1992) <doi:10.2307/2290201> and the Cairns-Blake-Dowd (2006) <doi:10.1111/j.1539-6975.2006.00195.x> models. It includes functions for fitting mortality models, analysing their goodness-of-fit and performing mortality projections and simulations.
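For reference, the Lee-Carter (1992) model, the simplest member of this family, expresses the log central death rate at age x in year t as

\log m_{x,t} = \alpha_x + \beta_x \kappa_t + \varepsilon_{x,t}

where \alpha_x is the average age profile of log mortality, \beta_x the age-specific sensitivity to the period index, and \kappa_t the time-varying mortality index that is projected to produce forecasts.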
User-friendly functions that parse the output of command-line programs used to query Slurm. Morris A. Jette and Tim Wickberg (2023) <doi:10.1007/978-3-031-43943-8_1> describe Slurm in detail.
Apache Drill is a low-latency distributed query engine designed to enable data exploration and analysis on both relational and non-relational data stores, scaling to petabytes of data. Methods are provided that enable working with Apache Drill instances via the REST API, DBI methods and using dplyr/dbplyr idioms. Helper functions are included to facilitate using official Drill Docker images/containers.
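A minimal Python sketch of querying a Drill instance over its REST API, which is what the package wraps from R (host, port, and the sample query are illustrative defaults):

import requests

resp = requests.post(
    "http://localhost:8047/query.json",
    json={"queryType": "SQL",
          "query": "SELECT * FROM cp.`employee.json` LIMIT 5"},
)
resp.raise_for_status()
print(resp.json().get("rows"))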
Fast computation of multivariate analyses of small (10s to 100s of markers) to big (1000s to 100000s of markers) genotype data. Runs Principal Component Analysis allowing for centering, z-score standardization, and scaling for genetic drift; projection of ancient samples to modern genetic space; and multivariate tests for differences in group location (Permutation-Based Multivariate Analysis of Variance) and dispersion (Permutation-Based Multivariate Analysis of Dispersion).
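A minimal Python sketch of genotype PCA with centering and drift scaling in the spirit of Patterson et al. (2006) (illustrative; assumes polymorphic markers and is not the package's API):

import numpy as np

def genotype_pca(g, k=2):
    """g: (n_samples, n_markers) matrix of 0/1/2 genotype counts."""
    p = g.mean(axis=0) / 2.0                    # allele frequencies
    z = (g - 2.0 * p) / np.sqrt(p * (1.0 - p))  # center and drift-scale
    u, s, _ = np.linalg.svd(z, full_matrices=False)
    return u[:, :k] * s[:k]                     # scores on the first k PCs

g = np.random.default_rng(0).integers(0, 3, size=(50, 200)).astype(float)
print(genotype_pca(g).shape)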
This package contains methods to generate and evaluate semi-artificial data sets. Based on a given data set, different methods learn data properties using machine learning algorithms and generate new data with the same properties. The package currently includes the following data generators: i) an RBF network based generator using rbfDDA() from package RSNNS; ii) a Random Forest based generator for both classification and regression problems; iii) a density forest based generator for unsupervised data. Data evaluation support tools include: a) single attribute based statistical evaluation: mean, median, standard deviation, skewness, kurtosis, medcouple, L/RMC, KS test, Hellinger distance; b) evaluation based on clustering using Adjusted Rand Index (ARI) and FM; c) evaluation based on classification performance with various learning models, e.g., random forests.
Import, create and assemble data needed to fit spatial-statistical stream-network models using the SSN2 package for R. Streams, observations, and prediction locations are represented as simple features and specific tools provided to define topological relationships between features; calculate the hydrologic distances (with flow-direction preserved) and the spatial additive function used to weight converging stream segments; and export the topological, spatial, and attribute information to an `SSN` (spatial stream network) object, which can be efficiently stored, accessed and analysed in R. A detailed description of methods used to calculate and format the spatial data can be found in Peterson, E.E. and Ver Hoef, J.M., (2014) <doi:10.18637/jss.v056.i02>.
Secure handling of API keys can be difficult. This package provides secure convenience functions for entering / handling API keys and opening connections via inversion of control on those keys. Works seamlessly between production and developer environments.
Basic functions for dealing with wav files and sound samples.