Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in the response headers.
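For example, the same query can be issued from R with the httr package; the host below is only a placeholder for wherever this service is running:

  library(httr)
  resp <- GET("https://example.org/api/packages",
              query = list(search = "hello", page = 1, limit = 20))
  headers(resp)                  # pagination details, e.g. the number of pages
  content(resp, as = "parsed")   # the matching packages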
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Generate continuous (normal, non-normal, or mixture distributions), binary, ordinal, and count (regular or zero-inflated, Poisson or Negative Binomial) variables with a specified correlation matrix, or one continuous variable with a mixture distribution. This package can be used to simulate data sets that mimic real-world clinical or genetic data sets (i.e., plasmodes, as in Vaughan et al., 2009 <DOI:10.1016/j.csda.2008.02.032>). The methods extend those found in the SimMultiCorrData R package. Standard normal variables with an imposed intermediate correlation matrix are transformed to generate the desired distributions. Continuous variables are simulated using either Fleishman's (1978) third-order <DOI:10.1007/BF02293811> or Headrick's (2002) fifth-order <DOI:10.1016/S0167-9473(02)00072-5> polynomial transformation method (the power method transformation, PMT). Non-mixture distributions require the user to specify mean, variance, skewness, standardized kurtosis, and standardized fifth and sixth cumulants. Mixture distributions require these inputs for the component distributions plus the mixing probabilities. Simulation occurs at the component level for continuous mixture distributions. The target correlation matrix is specified in terms of correlations with components of continuous mixture variables. These components are transformed into the desired mixture variables using random multinomial variables based on the mixing probabilities. However, the package provides functions to approximate expected correlations with continuous mixture variables given target correlations with the components. Binary and ordinal variables are simulated using a modification of ordsample() in the GenOrd package. Count variables are simulated using the inverse CDF method. There are two simulation pathways which calculate intermediate correlations involving count variables differently. Correlation Method 1 adapts Yahav and Shmueli's 2012 method <DOI:10.1002/asmb.901> and performs best with large count variable means and positive correlations or small means and negative correlations. Correlation Method 2 adapts Barbiero and Ferrari's 2015 modification of the GenOrd package <DOI:10.1002/asmb.2072> and performs best under the opposite scenarios. The optional error loop may be used to improve the accuracy of the final correlation matrix. The package also contains functions to calculate the standardized cumulants of continuous mixture distributions, check parameter inputs, calculate feasible correlation boundaries, and summarize and plot simulated variables.
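As a rough sketch of the power method transformation mentioned above: a standard normal variable Z is mapped to the target variable Y through a polynomial, Y = c0 + c1*Z + c2*Z^2 + c3*Z^3 for Fleishman's third-order method, with Headrick's fifth-order method adding c4*Z^4 + c5*Z^5 terms; the constants are solved for so that Y matches the requested mean, variance, skewness, kurtosis and (for the fifth-order method) fifth and sixth cumulants.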
Calculates daily global solar radiation at a horizontal surface using several well-known models (i.e. Angstrom-Prescott, Supit-Van Kappel, Hargreaves, Bristow and Campbell, and Mahmood-Hubbard), supports model calibration based on ground-truth data, and offers model auto-calibration. The FAO Penman-Monteith equation to calculate evapotranspiration is also included.
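For orientation, the Angstrom-Prescott model mentioned above relates global radiation at the surface to extraterrestrial radiation through relative sunshine duration, roughly Rs/Ra = a + b*(n/N), where n/N is the ratio of actual to maximum possible sunshine hours and a, b are the coefficients estimated in the calibration step.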
Allows you to make clean, good-looking scatter plots with the option to easily add marginal density or box plots on the axes. It is also available as a module for jamovi (see <https://www.jamovi.org> for more information). Scatr is based on the cowplot package by Claus O. Wilke and the ggplot2 package by Hadley Wickham.
This package provides a sparklyr extension adding the capability to work easily with nested data.
Starting from a regression model, it provides a stepwise procedure to select the linear predictor.
Interfaces with the SigOpt API. More info at <https://sigopt.com>.
This package provides a scalable Gibbs sampling implementation for high dimensional Bayesian regression with the continuous spike-and-slab prior. Niloy Biswas, Lester Mackey and Xiao-Li Meng, "Scalable Spike-and-Slab" (2022) <arXiv:2204.01668>.
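Loosely speaking, a continuous spike-and-slab prior places a two-component normal mixture on each regression coefficient, e.g. beta_j ~ (1 - gamma_j) * N(0, tau0^2) + gamma_j * N(0, tau1^2) with a small "spike" variance tau0^2 and a larger "slab" variance tau1^2; the Gibbs sampler alternates between updating the coefficients and the inclusion indicators gamma_j. The exact parameterization used in the cited paper may differ.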
In forensics, it is common and effective practice to analyse glass fragments from the scene and from suspects to gain evidence for placing a suspect at the crime scene. This kind of analysis involves comparing the physical and chemical attributes of glass fragments found on the person and at the crime scene, and assessing the significance of the likeness they share. The package implements the Scott-Knott Modification 2 algorithm (SKM2) (Christopher M. Triggs, James M. Curran, John S. Buckleton and Kevan A.J. Walsh (1997) <doi:10.1016/S0379-0738(96)02037-3> "The grouping problem in forensic glass analysis: a divisive approach", Forensic Science International, 85(1), 1--14) for small-sample glass fragment analysis using the refractive index (ri) of a set of glass samples. It also includes an experimental multivariate analogue of the Scott-Knott algorithm for similar analysis on glass samples with multiple chemical concentration variables and multiple samples of the same item, testing against the Hotelling T^2 distribution (J.M. Curran, C.M. Triggs, J.R. Almirall, J.S. Buckleton and K.A.J. Walsh (1997) <doi:10.1016/S1355-0306(97)72197-X> "The interpretation of elemental composition measurements from forensic glass evidence", Science & Justice, 37(4), 241--244).
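In its two-sample form, the Hotelling T^2 statistic referred to here is roughly T^2 = (n1*n2/(n1+n2)) * (xbar1 - xbar2)' S^-1 (xbar1 - xbar2), where xbar1 and xbar2 are the mean concentration vectors of the two sets of fragments and S is the pooled covariance matrix.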
Input/Output, processing and visualization of spectra taken with different spectrometers, including SVC (Spectra Vista), ASD and PSR (Spectral Evolution). Implements an S3 class spectra that other packages can build on. Provides methods to access, plot, manipulate, splice sensor overlap, vector normalize and smooth spectra.
Sonification (or audification) is the process of representing data by sounds in the audible range. This package provides the R function sonify() that transforms univariate data, sampled at regular or irregular intervals, into a continuous sound with time-varying frequency. The ups and downs in frequency represent the ups and downs in the data. Sonify provides a substitute for R's plot function to simplify data analysis for the visually impaired.
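A minimal sketch of the kind of call this enables (anything beyond passing the data vectors is left to the package defaults; see ?sonify for the full interface):

  library(sonify)
  x <- seq(0, 2 * pi, length.out = 256)
  sonify(x, sin(x))   # pitch rises and falls with the sine wave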
This package contains functions for estimating the STARTS model of Kenny and Zautra (1995, 2001) <DOI:10.1037/0022-006X.63.1.52>, <DOI:10.1037/10409-008>. Penalized maximum likelihood estimation and Markov Chain Monte Carlo estimation are also provided, see Luedtke, Robitzsch and Wagner (2018) <DOI:10.1037/met0000155>.
Identifies the 17 Sustainable Development Goals and their associated 169 targets in text.
Validate data.frames against schemas to ensure that data matches expectations. Define schemas using tidyselect and predicate functions for type consistency, nullability, and more. Schema failure messages can be tailored for non-technical users and are ideal for user-facing applications such as in shiny or plumber.
Implementation of the SIC epsilon-telescope method, using either single-parameter or distributional (multiparameter) regression. Includes classical regression with normally distributed errors and robust regression, where the errors follow a Laplace distribution. The "smooth generalized normal distribution" is used, where the estimation of an additional shape parameter allows the user to move smoothly between both types of regression. See O'Neill and Burke (2022), "Robust Distributional Regression with Automatic Variable Selection", <doi:10.48550/arXiv.2212.07317> for more details. This package also contains the data analyses from O'Neill and Burke (2023), "Variable selection using a smooth information criterion for distributional regression models", <doi:10.1007/s11222-023-10204-8>.
An overall test for seasonality of a given time series in addition to a set of individual seasonality tests as described by Ollech and Webel (forthcoming): An overall seasonality test. Bundesbank Discussion Paper.
Provides various functions and tools to help fit models for estimating treatment effects in stepped wedge cluster randomized trials. Implements methods described in Kenny, Voldal, Xia, and Heagerty (2022), "Analysis of stepped wedge cluster randomized trials in the presence of a time-varying treatment effect", <doi:10.1002/sim.9511>.
Spectral analysis of discrete data is performed using Fourier and Hilbert transforms, as well as a model-based approach known as the Lomb-Scargle method. Fragmented and irregularly spaced data can be processed by almost all methods. Both the FFT and the Lomb methods take multivariate data and return a standardized PSD (power spectral density). For didactic reasons, an analytical approach for deconvolution of noise spectra and the sampling function is provided. A user-friendly interface helps to interpret the results.
Construct various types of space-filling designs, including Latin hypercube designs, clustering-based designs, maximin designs, maximum projection designs, and uniform designs (Joseph 2016 <doi:10.1080/08982112.2015.1100447>). It also offers the option to optimize designs based on user-defined criteria. This work is supported by U.S. National Science Foundation grant DMS-2310637.
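As a point of reference, a maximin design spreads the n points so as to maximize the smallest pairwise distance, max over designs of min_{i<j} ||x_i - x_j||, whereas a uniform design minimizes a discrepancy measure between the empirical distribution of the points and the uniform distribution on the design region.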
Starting from a given object representing a fitted model (within a certain set of model classes) whose (non-)linear predictor includes some ordered factor(s) among the explanatory variables, a new model is constructed and fitted where each named factor is replaced by a single numeric score, suitably chosen so that the new variable produces a fit comparable with the standard methodology based on a set of polynomial contrasts. Two variants of the present approach have been developed, one in each of the following references: Azzalini (2023) <doi:10.1002/sta4.624> and Azzalini (2024) <doi:10.48550/arXiv.2406.15933>.
This package provides a SAS interface through SASPy (<https://sassoftware.github.io/saspy/>) and reticulate (<https://rstudio.github.io/reticulate/>). This package helps you create SAS sessions, execute SAS code on remote SAS servers, retrieve execution results and logs, and exchange datasets between SAS and R. It also helps you install SASPy and create a configuration file for the connection. Please review the SASPy license file as instructed so that you comply with its separate and independent license.
Computes the reliability of (normal) stress-strength models and builds two-sided or one-sided confidence intervals according to different approximate procedures.
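Here reliability means R = P(Y < X), the probability that the strength X exceeds the stress Y; for independent normal X and Y this reduces to R = Phi((muX - muY) / sqrt(sigmaX^2 + sigmaY^2)), with Phi the standard normal CDF.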
Performs spatial analysis on networks. Implements several methods for spatial analysis on networks: network kernel density estimation, building of spatial matrices based on network distance ('listw' objects from the spdep package), K-function estimation for point pattern analysis on networks, k nearest neighbours on networks, reachable area calculation, and graph generation. References: Okabe et al (2019) <doi:10.1080/13658810802475491>; Okabe et al (2012, ISBN:978-0470770818); Baddeley et al (2015, ISBN:9781482210200).
This package provides a function that behaves nearly like base::source() but implements a project-based, on-disk caching mechanism. It allows you to quasi-source() R scripts that gather data but may fail or take too much time to respond even when nothing new is expected. It comes with tools to check the cache and to execute the script on demand or whenever the cache is invalid.
Stacked ensemble for regression tasks based on the mlr3 framework, with a pipeline for preprocessing numeric and factor features and hyper-parameter tuning using grid or random search.
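Not this package's own interface, but a generic sketch of what regression stacking looks like with the underlying mlr3pipelines building blocks (the learner choices here are arbitrary and assume mlr3learners is installed):

  library(mlr3)
  library(mlr3pipelines)
  library(mlr3learners)
  # Cross-validated base learners whose out-of-fold predictions become features
  base <- gunion(list(
    po("learner_cv", lrn("regr.rpart")),
    po("learner_cv", lrn("regr.ranger"))
  ))
  # Combine the predictions and feed them to a linear meta-learner
  stack <- base %>>% po("featureunion") %>>% lrn("regr.lm")
  ensemble <- as_learner(stack)
  ensemble$train(tsk("mtcars"))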