Enter a query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
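For example, a minimal sketch in R using the 'httr' package (the base URL below is a placeholder; substitute the address where this site is hosted):

    library(httr)

    # Ask for the first page of results matching "hello".
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    headers(resp)  # pagination metadata (e.g., total pages) arrives here
    content(resp)  # the matching packages themselves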
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Metrics of difference for comparing pairs of variables or pairs of maps representing real or categorical variables at original and multiple resolutions.
Supports propensity score-based methods, including matching, stratification, and weighting, for estimating causal treatment effects. It also implements calibration using negative control outcomes to enhance robustness. debiasedTrialEmulation facilitates effect estimation for both binary and time-to-event outcomes, supporting risk ratio (RR), odds ratio (OR), and hazard ratio (HR) as effect measures. It integrates statistical modeling and visualization tools to assess covariate balance, equipoise, and bias calibration. Additional methods, including approaches to address immortal time bias, information bias, selection bias, and informative censoring, are under development. Users interested in these extended features are encouraged to contact the package authors.
Access Datastream content through <https://product.datastream.com/dswsclient/Docs/Default.aspx>, our historical financial database with over 35 million individual instruments or indicators across all major asset classes, including over 19 million active economic indicators. It features 120 years of data across 175 countries: the information you need to interpret market trends, economic cycles, and the impact of world events. Data spans bond indices, bonds, commodities, convertibles, credit default swaps, derivatives, economics, energy, equities, equity indices, ESG, estimates, exchange rates, fixed income, funds, fundamentals, interest rates, and investment trusts. Unique content includes I/B/E/S Estimates, Worldscope Fundamentals, point-in-time data, and Reuters Polls. Alongside the content sits a set of powerful analytical tools for exploring relationships between different asset types, with a library of customizable analytical functions. In-house timeseries can also be uploaded using the package to commingle with Datastream-maintained datasets, for use with these analytical tools and display in Datastream's flexible charting facilities in Microsoft Office.
This package provides a collection of asymmetrical kernels belonging to lifetime distributions for kernel density estimation. Mean Squared Errors (MSE) are calculated for the estimated curves. For this purpose, R functions allow the distribution to be Gamma, Exponential, or Weibull. For details see Chen (2000a,b), Jin and Kawczak (2003), and Salha et al. (2014) <doi:10.12988/pms.2014.4616>.
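As an illustration of the asymmetric-kernel idea, here is a hedged sketch of Chen's (2000) gamma kernel estimator; the function name is illustrative, not this package's API:

    # Gamma-kernel density estimate at points x from data X with
    # smoothing parameter b (Chen, 2000). Illustrative sketch only.
    gamma_kde <- function(x, X, b) {
      sapply(x, function(xi) mean(dgamma(X, shape = xi / b + 1, scale = b)))
    }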
Allows you to define rules which can be used to verify a given dataset. The package acts as a thin wrapper around more powerful data packages such as 'dplyr', 'data.table', 'arrow', and 'DBI' ('SQL'), which do the heavy lifting.
Divide taxonomic occurrence data into geographic regions of fair comparison, with three customisable methods to standardise area and extent. Calculate common biodiversity and range-size metrics on subsampled data. Background theory and practical considerations for the methods are described in Antell and others (2024) <doi:10.1017/pab.2023.36>.
Area under the curve (AUC; Myerson et al., 2001) <doi:10.1901/jeab.2001.76-235> is a popular measure used in discounting research. Although the calculation of AUC is standardized, there are differences in AUC based on some assumptions. For example, Myerson et al. (2001) <doi:10.1901/jeab.2001.76-235> assumed that (with delay discounting data) a researcher would impute an indifference point at zero delay equal to the value of the larger, later outcome. However, this practice is not clearly followed. This imputed zero-delay indifference point plays an important role in log and ordinal versions of AUC. Ordinal and log versions of AUC are described by Borges et al. (2016) <doi:10.1002/jeab.219>. The package can calculate all three versions of AUC [and includes a new version: IHS(AUC)], impute indifference points when x = 0, calculate ordinal AUC in the case of Halton sampling of x-values, and account for probability discounting AUC.
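As a worked sketch of the standard (linear) version in base R, with the zero-delay indifference point imputed at the larger-later value as Myerson et al. (2001) assume (illustrative data, not this package's API):

    delay <- c(0, 1, 7, 30, 180) / 180   # delays normalised to [0, 1]
    value <- c(1, 0.9, 0.75, 0.5, 0.2)   # indifference points / larger-later amount
    # Trapezoid rule: sum of interval widths times mean of adjacent heights.
    auc <- sum(diff(delay) * (head(value, -1) + tail(value, -1)) / 2)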
Compute the dynamic threshold panel model suggested by Stephanie Kremer, Alexander Bick, and Dieter Nautz (2013) <doi:10.1007/s00181-012-0553-9>, in which they extended the original static panel threshold estimation of Hansen (1999) <doi:10.1016/S0304-4076(99)00025-1> and the cross-sectional instrumental variable threshold model of Caner and Hansen (2004) <doi:10.1017/S0266466604205011>, where generalized method of moments type estimators are used.
Calculate posterior modes and credible intervals of parameters of the Dixon-Simon model for subgroup analysis (with binary covariates) in clinical trials. For details of the methodology, please refer to D.O. Dixon and R. Simon (1991), Biometrics, 47: 871-881.
This package creates a data frame containing the metadata associated with the documentation of a collection of R packages. It allows for linking topic names to their corresponding documentation online. If you maintain a universe meta-package, it helps create a comprehensive reference for its website.
Models the relationship between dose levels and responses in a pharmacological experiment using the 4 Parameter Logistic (4PL) model. Traditional dose-response modelling packages such as 'drc' and 'nplr' often fail due to convergence problems, especially when data have outliers or non-logistic shapes. This package provides robust estimation methods that are less affected by outliers, and alternative initialization methods that work well for data lacking logistic shapes. We provide bounds on the parameters of the 4PL model that prevent parameter estimates from diverging or converging to zero, and base their justification on a statistical principle. These methods serve as remedies for convergence failure problems. Gadagkar, S. R. and Call, G. B. (2015) <doi:10.1016/j.vascn.2014.08.006>; Ritz, C., Baty, F., Streibig, J. C., and Gerhard, D. (2015) <doi:10.1371/journal.pone.0146021>.
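For reference, one common parameterization of the 4PL mean function (parameterizations vary between packages; this sketch is illustrative):

    # Lower asymptote c, upper asymptote d, slope b, inflection point e.
    fourPL <- function(x, b, c, d, e) {
      c + (d - c) / (1 + exp(b * (log(x) - log(e))))
    }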
Collection of functions for distributed lag linear and non-linear models.
Differential partial correlation identification with the ridge and the fusion penalties.
This package provides access to Dataverse APIs <https://dataverse.org/> (versions 4-5), enabling data search, retrieval, and deposit. For Dataverse versions <= 3.0, use the archived dvn package <https://cran.r-project.org/package=dvn>.
An add-on package to DImodels for the fitting of biodiversity and ecosystem function relationship study data with multiple ecosystem function responses and/or time points. This package uses the multivariate and repeated measures Diversity-Interactions (DI) methods developed by Kirwan et al. (2009) <doi:10.1890/08-1684.1>, Finn et al. (2013) <doi:10.1111/1365-2664.12041>, and Dooley et al. (2015) <doi:10.1111/ele.12504>.
Add a "Did You Mean" feature to the R interactive. With this package, error messages for misspelled input of variable names or package names suggest what you really want to do in addition to notification of the mistake.
Builds both ROC (Receiver Operating Characteristic) and DET (Detection Error Tradeoff) curves from a set of predictors, which are the results of a binary classification system. The curves give a general view of the classifier's performance and are useful for comparing the performance of different systems.
Creates and refines data nuggets. Data nuggets reduce a large dataset into a small collection of nuggets of data, each containing a center (location), weight (importance), and scale (variability) parameter. Data nugget centers are created by choosing observations in the dataset which are as equally spaced apart as possible. Data nugget weights are created by counting the number of observations closest to a given data nugget center. We then say the data nugget contains these observations, and the data nugget center is recalculated as the mean of these observations. Data nugget scales are created by calculating the trace of the covariance matrix of the observations contained within a data nugget, divided by the dimension of the dataset. Data nuggets are refined by splitting data nuggets whose scales or shapes (defined as the ratio of the two largest eigenvalues of the covariance matrix of the observations contained within the data nugget) are too large. References: [1] Beavers, T. E., Cheng, G., Duan, Y., Cabrera, J., Lubomirski, M., Amaratunga, D., & Teigler, J. E. (2024). Data Nuggets: A Method for Reducing Big Data While Preserving Data Structure. Journal of Computational and Graphical Statistics, 1-21. [2] Cherasia, K. E., Cabrera, J., Fernholz, L. T., & Fernholz, R. (2022). Data Nuggets in Supervised Learning. In Robust and Multivariate Statistical Methods: Festschrift in Honor of David E. Tyler (pp. 429-449). Cham: Springer International Publishing.
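A hedged sketch of the scale and shape quantities described above (function names are illustrative, not this package's exports):

    nugget_scale <- function(obs) {   # trace of covariance / data dimension
      sum(diag(cov(obs))) / ncol(obs)
    }
    nugget_shape <- function(obs) {   # ratio of the two largest eigenvalues
      ev <- eigen(cov(obs), symmetric = TRUE, only.values = TRUE)$values
      ev[1] / ev[2]
    }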
Connect to the DocuSign Rest API <https://www.docusign.com/p/RESTAPIGuide/RESTAPIGuide.htm>, which supports embedded signing and sending of documents.
The DImodels package is suitable for analysing data from biodiversity and ecosystem function studies using the Diversity-Interactions (DI) modelling approach introduced by Kirwan et al. (2009) <doi:10.1890/08-1684.1>. Suitable data will contain proportions for each species and a community-level response variable, and may also include additional factors, such as blocks or treatments. The package can perform data manipulation tasks, such as computing pairwise interactions (the DI_data() function), can perform an automated model selection process (the autoDI() function) and has the flexibility to fit a wide range of user-defined DI models (the DI() function).
Improves the balance of optimal matching with near-fine balance by penalizing unbalanced covariates in their directions of imbalance. Many directional penalties can also be viewed as Lagrange multipliers, pushing a matched sample in the direction of satisfying a linear constraint that would not be satisfied without penalization. Yu and Rosenbaum (2019) <doi:10.1111/biom.13098>.
Demonstration code showing how (univariate) kernel density estimates are computed, at least conceptually, and allowing users to experiment with different kernels, should they so wish. The method used follows directly the definition, but gains efficiency by replacing the observations by frequencies in a very fine grid covering the sample range. A canonical reference is B. W. Silverman (1998) <doi:10.1201/9781315140919>. NOTE: the density function in the stats package uses a more sophisticated method based on the fast Fourier transform, and that function should be used if computational efficiency is a prime consideration.
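A conceptual sketch of that binned approach (Gaussian kernel; the function name is illustrative, and stats::density() remains the efficient choice):

    binned_kde <- function(x, eval, h, nbins = 10000) {
      grid <- seq(min(x), max(x), length.out = nbins)        # very fine grid
      freq <- tabulate(findInterval(x, grid), nbins = nbins) # observation counts
      # Apply the KDE definition to the binned frequencies.
      sapply(eval, function(t) sum(freq * dnorm((t - grid) / h)) / (length(x) * h))
    }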
Perform nonparametric Bayesian analysis using Dirichlet processes without the need to program the inference algorithms. Utilise included pre-built models or specify custom models and allow the dirichletprocess package to handle the Markov chain Monte Carlo sampling. Our Dirichlet process objects can act as building blocks for a variety of statistical models, including but not limited to: density estimation, clustering, and prior distributions in hierarchical models. See Teh, Y. W. (2011) <https://www.stats.ox.ac.uk/~teh/research/npbayes/Teh2010a.pdf>, among many other sources.
Extends the distr package with functionals, distances, and conditional distributions.