Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
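For example, assuming the service returns a JSON body, the endpoint can be queried from R with the httr package; the base URL below is a placeholder for this site's address:

# Minimal sketch using httr; the base URL is a placeholder and a JSON
# response body is assumed -- pagination details are read from the headers.
library(httr)

resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

results    <- content(resp, as = "parsed")   # parsed response body
pagination <- headers(resp)                  # pagination information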
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Forecasting univariate time series with different decomposition-based Extreme Learning Machine models. For method details, see Yu L, Wang S, Lai KK (2008) <doi:10.1016/j.eneco.2008.05.003> and Parida M, Behera MK, Nayak N (2018) <doi:10.1109/ICSESP.2018.8376723>.
This package provides a user-friendly, easy-to-understand way of doing event history regression for marginal estimands of interest, including the cumulative incidence and the restricted mean survival, using the pseudo-observation framework for estimation. For a review of the methodology, see Andersen and Pohar Perme (2010) <doi:10.1177/0962280209105020> or Sachs and Gabriel (2022) <doi:10.18637/jss.v102.i09>. The interface uses the well-known formulation of a generalized linear model and allows for features including plotting of residuals, the use of sampling weights, and corrected variance estimation.
Forecasting time series with different decomposition-based ARIMA models. For method details, see Yu L, Wang S, Lai KK (2008) <doi:10.1016/j.eneco.2008.05.003>.
This package provides some of the most important measures for evaluating a model. Given the real and predicted classes, measures such as accuracy, sensitivity, specificity, ppv, npv, fmeasure, mcc, and others are returned.
User-friendly interface based on the R package gstat to fit exponential parametric models to empirical semi-variograms in order to model the spatial correlation structure of health data. Geo-located health outcomes of survey participants may be used to model spatial effects on health in an ego-centred approach. The package contains a range of functions to help explore the spatial structure of the data, as well as to visualize the fit of exponential models for various metaparameter combinations with respect to the number of lag intervals and the maximal distance. Furthermore, the outcome of interest can be adjusted for covariates by fitting a linear regression in a preliminary step before the semi-variogram fitting process.
This package provides a wrapper of different methods from linear algebra for the equations introduced in The Atlas of Economic Complexity and related literature. This package provides standard matrix and graph output that can be used seamlessly with other packages. See <doi:10.21105/joss.01866> for a summary of these methods and their evolution in the literature.
Embed interactive charts into Shiny applications. These charts are generated by the ECharts library developed by Baidu (<http://echarts.baidu.com/>). The current version supports line charts, bar charts, pie charts, scatter plots, gauges, word clouds, radar charts, tree maps, and heat maps.
This package provides a consistent set of functions for enriching and analyzing sovereign-level economic data. Economists, data scientists, and financial professionals can use the package to add standardized identifiers, demographic and macroeconomic indicators, and derived metrics such as gross domestic product per capita or government expenditure shares.
EB-PRS is a novel method that leverages information on effect sizes across all markers to improve prediction accuracy. No parameter tuning is needed, and no external information is required. This R package calculates polygenic risk scores from given training summary statistics and testing data. EB-PRS can be used to extract main information, estimate empirical Bayes parameters, derive polygenic risk scores for each individual in the testing data, and evaluate the PRS in terms of AUC and predictive r2. See Song et al. (2020) <doi:10.1371/journal.pcbi.1007565> for a detailed presentation of the method.
This package provides a system for calculating the optimal sampling effort, based on the ideas of "Ecological cost-benefit optimization" as developed by A. Underwood (1997, ISBN 0 521 55696 1). Data are obtained from simulated ecological communities, and the optimization proceeds through the following functions: (1) prep_data() takes the original dataset, formats and arranges it, and creates simulated sets that can be used as a basis for estimating statistical power and type II error. (2) sim_beta() estimates the statistical power for the different sampling efforts specified by the user. (3) sim_cbo() then calculates the optimal sampling effort, based on the statistical power and the sampling costs. Additionally, (4) scompvar() calculates the variation components needed by (5) Underwood_cbo() to calculate the optimal combination of the number of sites and samples, depending on either an economic budget or a desired statistical accuracy. Lastly, (6) plot_power() helps the user visualize the results of sim_beta().
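As a rough illustration of how these functions chain together, here is a minimal workflow sketch; the object and argument names are hypothetical placeholders rather than the package's actual signatures, so consult the package documentation for the real interfaces:

# Hypothetical workflow sketch; objects and arguments are placeholders.
sims   <- prep_data(my_community_data)    # simulated sets from the original data
power  <- sim_beta(sims)                  # statistical power per sampling effort
effort <- sim_cbo(power)                  # optimal effort from power and costs
vars   <- scompvar(sims)                  # variation components
design <- Underwood_cbo(vars)             # optimal sites-by-samples combination
plot_power(power)                         # visualize the sim_beta() results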
This package provides functions and classes designed to handle and visualise epidemiological flows between locations. Also contains a statistical method for predicting disease spread from flow data initially described in Dorigatti et al. (2017) <doi:10.2807/1560-7917.ES.2017.22.28.30572>. This package is part of the RECON (<https://www.repidemicsconsortium.org/>) toolkit for outbreak analysis.
This package provides functions that compute probabilistic excursion sets, contour credibility regions, contour avoiding regions, and simultaneous confidence bands for latent Gaussian random processes and fields. The package also contains functions that calculate these quantities for models estimated with the INLA package. The main references for excursions are Bolin and Lindgren (2015) <doi:10.1111/rssb.12055>, Bolin and Lindgren (2017) <doi:10.1080/10618600.2016.1228537>, and Bolin and Lindgren (2018) <doi:10.18637/jss.v086.i05>. These references can be generated with the citation() function in R.
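For instance, assuming the package is installed under the name excursions (as in the references above), the entries can be printed from an R session with:

citation("excursions")   # prints the package's reference entries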
Implementation of uniformly most powerful invariant equivalence tests for one- and two-sample problems (paired and unpaired) as described in Wellek (2010, ISBN:978-1-4398-0818-4). One-sided alternatives (non-inferiority and non-superiority tests) are also supported. Essentially a variant of a t-test with the (relaxed) null and alternative hypotheses exchanged.
The remit of the European Clinical Trials Database (EudraCT <https://eudract.ema.europa.eu/>), or ClinicalTrials.gov <https://clinicaltrials.gov/>, is to provide open access to summaries of all registered clinical trial results, thus aiming to prevent the non-reporting of negative results and to make results openly available to inform future research. The amount of information required and the format of the results, however, impose a large extra workload on clinical trial units at the end of studies. In particular, the adverse-event-reporting component requires entering each unique combination of treatment group and safety event; for every such event, a further four pieces of information (body system, number of occurrences, number of subjects, number exposed) for non-serious events, plus an extra three pieces of data for serious adverse events (numbers of causally related events, deaths, and causally related deaths). This package prepares the statistics required by EudraCT and formats them into the precise requirements to directly upload an XML file into the web portal, with no further data entry by hand.
Set of wrappers for the ncdf4 package to simplify and extend its reading/writing capabilities into/from multidimensional R arrays.
An implementation of the clustering methods of categorical data discussed in Amiri, S., Clarke, B., and Clarke, J. (2015). Clustering categorical data via ensembling dissimilarity matrices. Preprint <arXiv:1506.07930>.
Fit Bayesian (hierarchical) cognitive models via a linear modeling language interface, using particle Metropolis Markov chain Monte Carlo sampling with Gibbs steps. The diffusion decision model (DDM), linear ballistic accumulator model (LBA), racing diffusion model (RDM), and lognormal race model (LNR) are supported. Additionally, users can specify their own likelihood function and/or opt for non-hierarchical estimation, as well as choose a diagonal, blocked, or full multivariate normal group-level distribution to test individual differences. Prior specification is facilitated through methods that visualize the (implied) prior. A wide range of plotting functions assist in assessing model convergence and posterior inference. Models can be easily evaluated using functions that plot posterior predictions or using relative model comparison metrics such as information criteria or Bayes factors. References: Stevenson et al. (2024) <doi:10.31234/osf.io/2e4dq>.
Maximum likelihood estimation of an extended class of row-column (RC) association models for two-dimensional contingency tables, which are formulated by a condition of reduced rank on a matrix of extended association parameters; see Forcina (2019) <arXiv:1910.13848>. These parameters are defined by choosing the logit type for the row and column variables among four different options and a transformation derived from suitable divergence measures.
Chat with large language models from a range of providers including Claude <https://claude.ai>, OpenAI <https://chatgpt.com>, and more. Supports streaming, asynchronous calls, tool calling, and structured data extraction.
This package provides tools for integrated sensitivity analysis of evidence factors in observational studies. When an observational study allows for multiple independent or nearly independent inferences that are vulnerable, if at all, to different biases, we have multiple evidence factors. This package provides methods that respect type I error rate control. Examples are provided of integrated evidence factors analysis in a longitudinal study with a continuous outcome and in a case-control study. Karmakar, B., French, B., and Small, D. S. (2019) <DOI:10.1093/biomet/asz003>.
Computes the most important properties of four Bayesian early gating designs (two single-arm and two randomized controlled designs), such as the minimum required number of successes in the experimental group to make a GO decision, the operating characteristics, and the average operating characteristics with respect to the sample size. These might aid in deciding which design to use for an early phase trial.
Treatments of a one-way layout that are equivalent to a control can be selected with this package. Bonferroni-adjusted "two one-sided t-tests" (TOST) and related simultaneous confidence intervals are given for both differences and ratios of means of normally distributed data. For the case of equal variances and balanced sample sizes for the treatment groups, the single-step procedure of Bofinger and Bofinger (1995) <doi:10.1111/j.2517-6161.1995.tb02058.x> can be chosen. For non-normal data, the Wilcoxon test is applied.
Streamlines the fitting of common Bayesian item response models using Stan.
Facilitates univariate and multivariate analysis of evolutionary sequences of phenotypic change. The package extends the modeling framework available in the paleoTS package. Please see <https://klvoje.github.io/evoTS/index.html> for information about the package and the implemented models.