Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
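For example, a minimal sketch of a paginated query using the httr R package; the host below is a placeholder for this site's address, and a JSON response body is an assumption:

    library(httr)
    # Placeholder host; substitute the address of this site.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    headers(resp)   # pagination information is carried in the headers
    content(resp)   # parsed list of matching packages (assuming JSON)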
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a penalized/non-penalized implementation of dynamic regression in the presence of autocorrelated residuals (DREGAR) using iterative penalized/ordinary least squares. It applies Mallows' Cp, AIC, BIC and GCV to select the tuning parameters.
This package provides functions to download and process data regarding the Brazilian Amazon region from a variety of official sources.
This package contains functions for the MCMC simulation of dyadic network models: the j2 model (Zijlstra, 2017, <doi:10.1080/0022250X.2017.1387858>), the p2 model (Van Duijn, Snijders & Zijlstra, 2004, <doi:10.1046/j.0039-0402.2003.00258.x>), the multilevel p2 model (Zijlstra, Van Duijn & Snijders (2009) <doi:10.1348/000711007X255336>), and the bidirectional (multilevel) counterpart of the multilevel p2 model, the (multilevel) b2 model, as described in Zijlstra, Van Duijn & Snijders (2009) <doi:10.1348/000711007X255336>.
Designed to create a basic data dictionary and append it to the original dataset's attributes list. The package takes a tidy dataset and creates a data frame that serves as a linker to aid in building the dictionary. The dictionary is then appended to the list of the original dataset's attributes. The user can enter variable and item descriptions by writing code, or use alternate functions that prompt for them.
Statistical hypothesis testing of pattern heterogeneity via differences in underlying distributions across multiple contingency tables. Five tests are included: the comparative chi-squared test (Song et al. 2014 <doi:10.1093/nar/gku086>; Zhang et al. 2015 <doi:10.1093/nar/gkv358>), the Sharma-Song test (Sharma et al. 2021 <doi:10.1093/bioinformatics/btab240>), the heterogeneity test, the marginal-change test (Sharma et al. 2020 <doi:10.1145/3388440.3412485>), and the strength test (Sharma et al. 2020 <doi:10.1145/3388440.3412485>). Under the null hypothesis that row and column variables are statistically independent and joint distributions are equal, all of their test statistics asymptotically follow a chi-squared distribution. A comprehensive type analysis categorizes the relation among the contingency tables into types null, 0, 1, and 2 (Sharma et al. 2020 <doi:10.1145/3388440.3412485>). The tests can identify heterogeneous patterns that differ in either the first order (marginal) or the second order (differential departure from independence). Second-order differences reveal more fundamental changes than first-order differences across heterogeneous patterns.
This package provides density, distribution function, quantile function and random generation for the split normal and split-t distributions, and computes the mean, variance, skewness and kurtosis of the two distributions (Li, F., Villani, M. and Kohn, R. (2010) <doi:10.1016/j.jspi.2010.04.031>).
Reaction rate dynamics can be retrieved from metabolite concentration time courses. The user has to provide the corresponding stoichiometric matrix but not a regulation model (Michaelis-Menten or similar). Instead of solving an ordinary differential equation (ODE) system describing the evolution of concentrations, we use B-splines to capture the concentration and rate dynamics, then solve a least-squares problem on their coefficients with non-negativity (and optionally monotonicity) constraints. Constraints can also be set on initial concentration values. The package dynafluxr can be used as a library but also as an application, with a command-line interface, dynafluxr::cli("-h"), or a graphical user interface, dynafluxr::gui().
Diagnostics for linear L1 regression (also known as LAD, Least Absolute Deviations), including: estimation, confidence intervals, tests of hypotheses, measures of leverage, diagnostic methods for L1 regression, and special diagnostic graphs. The algorithms are based on Dielman (2005) <doi:10.1080/0094965042000223680>, Elian et al. (2000) <doi:10.1080/03610920008832518> and Dodge (1997) <doi:10.1006/jmva.1997.1666>. This package builds on the quantreg package, a well-established package for fitting quantile regression models. There are also tests to verify whether the errors have a Laplace distribution, based on the work of Puig and Stephens (2000) <doi:10.2307/1270952>.
Generate point data for representing people within spatial data. It collects a suite of tools for creating simple dot-density maps. Several functions from different spatial packages are standardized to take the same arguments so that they can be easily substituted for each other.
Estimates dose-response relations from summarized dose-response data and combines them according to the principles of (multivariate) random-effects models.
This package provides methods to estimate the optimal treatment regime among all linear regimes via smoothed estimation methods, and construct element-wise confidence intervals for the optimal linear treatment regime vector, as well as the confidence interval for the optimal value via wild bootstrap procedures, if the population follows treatments recommended by the optimal linear regime. See more details in: Wu, Y. and Wang, L. (2021), "Resampling-based Confidence Intervals for Model-free Robust Inference on Optimal Treatment Regimes", Biometrics, 77: 465–476, <doi:10.1111/biom.13337>.
Generate descriptive statistics such as measures of location, dispersion, frequency tables, cross tables, group summaries and multiple one/two way tables.
An Apache-licensed alternative to Highcharter, providing functions for fast and beautiful interactive visualization in Markdown and Shiny.
This package provides data transformations, estimation utilities, predictive evaluation measures and simulation functions for discrete time survival analysis.
This package provides a distance density clustering (DDC) algorithm in R. DDC uses dynamic time warping (DTW) to compute a similarity matrix, based on which cluster centers and cluster assignments are found. DDC inherits DTW arguments and constraints. The cluster centers are centroid points calculated using the DTW Barycenter Averaging (DBA) algorithm. The clustering process is divisive: at each iteration, cluster centers are updated and the data are reassigned to them. Early stopping is possible. The output includes the cluster centers and the clustering assignment, as described in Ma et al. (2017) <doi:10.1109/ICDMW.2017.11>.
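As an illustration of the loop just described, here is a minimal sketch in R. It is not the package's implementation: plain unconstrained DTW replaces the constrained DTW the package inherits, an element-wise mean (which assumes equal-length series) stands in for DBA, and dtw_dist and ddc_sketch are illustrative names.

    # Dynamic-programming DTW distance between two numeric series.
    dtw_dist <- function(x, y) {
      D <- matrix(Inf, length(x) + 1, length(y) + 1)
      D[1, 1] <- 0
      for (i in seq_along(x)) {
        for (j in seq_along(y)) {
          D[i + 1, j + 1] <- abs(x[i] - y[j]) +
            min(D[i, j + 1], D[i + 1, j], D[i, j])
        }
      }
      D[length(x) + 1, length(y) + 1]
    }

    # series: list of equal-length numeric vectors; k: number of clusters.
    ddc_sketch <- function(series, k, max_iter = 10) {
      centers <- series[seq_len(k)]         # naive initialisation
      membership <- rep(0L, length(series))
      for (it in seq_len(max_iter)) {
        # Reassign each series to its nearest center under DTW.
        new_membership <- vapply(series, function(s)
          which.min(vapply(centers, dtw_dist, numeric(1), y = s)),
          integer(1))
        if (identical(new_membership, membership)) break  # early stopping
        membership <- new_membership
        # Update each center; an element-wise mean stands in for DBA here.
        for (cl in seq_len(k)) {
          members <- series[membership == cl]
          if (length(members) > 0)
            centers[[cl]] <- Reduce(`+`, members) / length(members)
        }
      }
      list(centers = centers, clustering = membership)
    }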
Plan optimal sample size allocation and go/no-go decision rules for phase II/III drug development programs with time-to-event, binary or normally distributed endpoints, assuming either fixed treatment effects or a prior distribution for the treatment effect, using methods from Kirchner et al. (2016) <doi:10.1002/sim.6624> and Preussler (2020). Optimality is in the sense of maximal expected utility, where the utility is a function taking into account the expected cost and benefit of the program. The approach can be extended to more complex settings with bias correction (Preussler S et al. (2020) <doi:10.1186/s12874-020-01093-w>), multiple phase III trials (Preussler et al. (2019) <doi:10.1002/bimj.201700241>), multi-arm trials (Preussler et al. (2019) <doi:10.1080/19466315.2019.1702092>), and multiple endpoints (Kieser et al. (2018) <doi:10.1002/pst.1861>).
Estimation of a density from grouped (tabulated) summary statistics evaluated in each of the big bins (or classes) partitioning the support of the variable. These statistics include class frequencies and central moments of order one up to four. The log-density is modelled using a linear combination of penalised B-splines. The multinomial log-likelihood for the frequencies is combined with a roughness penalty based on the differences in the coefficients of neighbouring B-splines and with the log of a root-n approximation of the sampling density of the observed vector of central moments in each class. The resulting penalized log-likelihood is maximized using the EM algorithm to obtain an estimate of the spline parameters and, consequently, of the variable density and related quantities such as quantiles; see Lambert, P. (2021) <arXiv:2107.03883> for details.
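In symbols, the criterion just described can be sketched as follows (the notation is illustrative, not the paper's):

    \ell_{\mathrm{pen}}(\phi) = \sum_j n_j \log \pi_j(\phi)
      + \sum_j \log \hat{f}\left(\bar{m}_j \mid \phi\right)
      - \frac{\lambda}{2} \sum_k \left(\Delta^r \phi_k\right)^2

where \phi holds the B-spline coefficients of the log-density, n_j and \pi_j(\phi) are the frequency and modelled probability of class j, \hat{f} is the root-n approximation of the sampling density of the observed central moments \bar{m}_j, and \Delta^r is an r-th order difference operator acting on neighbouring coefficients; the EM algorithm maximizes \ell_{\mathrm{pen}} over \phi.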
Predict future values with hybrid methods combining Pattern Sequence based Forecasting (PSF), Autoregressive Integrated Moving Average (ARIMA), Empirical Mode Decomposition (EMD) and Ensemble Empirical Mode Decomposition (EEMD).
Estimates Two-way Fixed Effects difference-in-differences/event-study models using the approach proposed by Gardner (2021) <doi:10.48550/arXiv.2207.05943>. To avoid the problems caused by OLS estimation of the Two-way Fixed Effects model, this function first estimates the fixed effects and covariates using untreated observations, and then, in a second stage, estimates the treatment effects.
Load configuration from a .env file in the current working directory into environment variables.
The dynpred package contains functions for dynamic prediction in survival analysis.
Various utilities for the Davies distribution.
Identifies code blocks that have a high level of similarity within a set of R files.
Weighted frequency and contingency tables of categorical variables, comparisons of the mean value of a numerical variable across the levels of a factor, and methods to produce xtable objects of the tables and to plot them. There are also functions to facilitate character-encoding conversion of objects, to quickly convert fixed-width files into CSV ones, and to export a data.frame to a text file with the R and SPSS code needed to reread the data.