Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
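A minimal sketch of calling this endpoint from R, assuming the httr and jsonlite packages; the base URL below is a placeholder, substitute the site's actual address.

    library(httr)
    library(jsonlite)

    base_url <- "https://example.org"   # placeholder for this site's host
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
    headers(resp)   # pagination information (e.g. the number of pages) is returned here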
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Interact with the Attentional Control Data Collection (ACDC). Connect to the database via connect_to_db(), set filter arguments via add_argument() and query the database via query_db().
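A hedged sketch of that workflow; only connect_to_db(), add_argument(), and query_db() come from the description above, while the package name, the file path, and all argument names are assumptions for illustration.

    library(ACDCquery)   # assumed package name

    conn <- connect_to_db("acdc.db")                        # hypothetical path to the ACDC database
    filters <- add_argument(list(), conn,
                            variable = "task", values = "flanker")   # hypothetical filter arguments
    dat <- query_db(conn, filters)                          # retrieve the matching records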
Adaptive Sparse Multi-block Partial Least Square, a supervised algorithm, extends Sparse Multi-block Partial Least Square by allowing different quantiles to be used in different blocks of different partial least square components to decide the proportion of features to be retained. The best combination of quantiles can be chosen from a set of user-defined quantile combinations by cross-validation. This enables feature selection for each block, and the selected features can then be used to predict the outcome. For example, in biomedical applications, clinical covariates plus different types of omics data such as microbiome, metabolome, mRNA, methylation, and copy number variation data might be predictive of patient outcomes such as survival time or response to therapy. The different types of data can be placed in different blocks and fitted along with survival time. The fitted model can then be used to predict survival for new samples with the corresponding clinical covariates and omics data. In addition, Adaptive Sparse Multi-block Partial Least Square Discriminant Analysis is included, which extends Adaptive Sparse Multi-block Partial Least Square to classify categorical outcomes.
We provide tools to estimate two prediction accuracy metrics for risk scores: the average positive predictive value (AP) and the well-known AUC (the area under the receiver operating characteristic curve). The outcome of interest is either binary or a censored event time. Note that for censored event times, our estimates of the AP and the AUC are time-dependent for pre-specified time interval(s). A function that compares the APs of two risk scores/markers is also included. Optional outputs include positive predictive values and true positive fractions at specified marker cut-off values, and a plot of the time-dependent AP versus time (available for event time data).
Annuity Random Interest Rates proposes different techniques for approximating the present and final value of a unitary annuity-due or annuity-immediate when the interest rate is treated as a random variable. Cruz Rambaud et al. (2017) <doi:10.1007/978-3-319-54819-7_16>. Cruz Rambaud et al. (2015) <doi:10.23755/rm.v28i1.25>.
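Since the package's own function names are not given here, the following is a generic Monte Carlo illustration of the quantity being approximated (the expected present value of a unitary annuity-immediate under a random interest rate), not the package's method; the normal interest-rate distribution is an assumption.

    # Generic illustration (not the package's API): expected present value of a
    # unitary n-year annuity-immediate when the annual interest rate is random,
    # here assumed ~ N(0.03, 0.01), approximated by Monte Carlo.
    set.seed(1)
    n <- 10                                    # number of unit payments
    i <- rnorm(1e4, mean = 0.03, sd = 0.01)    # simulated interest rates
    pv <- vapply(i, function(r) sum((1 + r)^-(1:n)), numeric(1))   # present value for each rate
    mean(pv)                                   # approximate expected present value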
Developed to perform the following tasks: (1) computing the probability density function and distribution function of a univariate stable distribution; (2) generating from univariate stable, truncated stable, multivariate elliptically contoured stable, and bivariate strictly stable distributions; (3) estimating the parameters of univariate symmetric stable, skew stable, Cauchy, multivariate elliptically contoured stable, and multivariate strictly stable distributions; and (4) estimating the parameters of mixtures of symmetric stable and mixtures of Cauchy distributions.
Client for AWS Transcribe <https://aws.amazon.com/documentation/transcribe>, a cloud transcription service that can convert an audio media file in English and other languages into a text transcript.
This package provides functions to simplify and standardise antimicrobial resistance (AMR) data analysis and to work with microbial and antimicrobial properties by using evidence-based methods, as described in <doi:10.18637/jss.v104.i03>.
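A brief sketch of working with microbial and antimicrobial properties; the functions shown (as.mo(), mo_name(), mo_gramstain(), ab_name()) are used here as understood from the AMR package, so treat the exact names and outputs as assumptions and check the package documentation.

    library(AMR)

    mo <- as.mo("E. coli")      # standardise a microorganism to a stable code
    mo_name(mo)                 # full taxonomic name, e.g. "Escherichia coli"
    mo_gramstain(mo)            # Gram stain of the organism
    ab_name("AMX")              # full antimicrobial name for the code "AMX"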
Designed to help health economic modellers when building and reviewing models. The visualisation functions allow users to more easily review the network of functions in a project and get lay summaries of them. The assertions included are intended to check for common errors, thereby freeing up time for modellers to focus on tests specific to the individual model in development or review. For more details see Smith and colleagues (2024) <doi:10.12688/wellcomeopenres.23180.1>.
The functions defined in this program implement adaptive two-stage tests. Currently, four tests are included: Bauer and Koehne (1994), Lehmacher and Wassmer (1999), Vandemeulebroecke (2006), and the horizontal conditional error function. User-defined tests can also be implemented. Reference: Vandemeulebroecke, An investigation of two-stage tests, Statistica Sinica 2006.
This package provides tools for estimating length-based indicators from length frequency data to assess fish stock status and manage fisheries sustainably. Implements methods from Cope and Punt (2009) <doi:10.1577/C08-025.1> for data-limited stock assessment and Froese (2004) <doi:10.1111/j.1467-2979.2004.00144.x> for detecting overfishing using simple indicators. Key functions include: FrequencyTable(): calculates the frequency table from the collected data and also extracts the length frequency data from the frequency table with the upper length_range; the bin width for class intervals is a numeric value that, if not provided, is automatically calculated using the Wang (2020) <doi:10.1016/j.fishres.2019.105474> formula. FreqTM(): creates a frequency distribution table for fish length data across multiple months using a consistent length class structure. The bin width is determined by either a custom value or Wang's formula, applied uniformly across all months. The function dynamically detects and renames columns to Month and Length from the input dataframe. The maximum observed length is included in the last class, with the upper bound set to the smallest multiple of the bin width greater than or equal to the maximum length. Months can be converted to dates using a configurable day and year, with dates assigned sequentially in day.month.year format (e.g., 15.01.26). FishPar(): calculates the length-based indicators (LBIs) proposed by Froese (2004) <doi:10.1111/j.1467-2979.2004.00144.x>, such as the percentage of mature fish (Pmat), percentage of optimal length fish (Popt), percentage of mega spawners (Pmega), and their sum, Pobj. This function also estimates confidence intervals for different lengths, visualizes length frequency distributions, and provides data frames containing the calculated values. FishSS(): makes decisions based on the criteria of Cope and Punt (2009) <doi:10.1577/C08-025.1> and the parameters calculated by FishPar() (e.g., Pobj, Pmat, Popt, LM_ratio) to determine stock status relative to target spawning biomass (TSB40) and limit spawning biomass (LSB25), and selectivity. LWR(): fits and visualizes length-weight relationships using linear regression, with options for log-transformation and customizable plotting.
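A hedged outline of that workflow; the function names come from the description above, but the toy data and the argument names shown are illustrative assumptions rather than documented signatures.

    library(aLBI)

    lengths <- data.frame(Length = c(12.5, 14.0, 15.5, 16.0, 18.5, 20.0))  # toy length observations
    ft  <- FrequencyTable(data = lengths)    # frequency table with automatically calculated bin width
    lbi <- FishPar(data = lengths)           # Pmat, Popt, Pmega, Pobj with confidence intervals (assumed arguments)
    ss  <- FishSS(data = lbi)                # TSB40 / LSB25 stock-status decision (assumed arguments)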
Simulate clinical trials for diagnostic test devices and evaluate the operating characteristics under an adaptive design with futility assessment determined via the posterior predictive probabilities.
This package provides a powerful tool for automating the early detection of disease outbreaks in time series data. aeddo employs advanced statistical methods, including hierarchical models, in an innovative manner to effectively characterize outbreak signals. It is particularly useful for epidemiologists, public health professionals, and researchers seeking to identify and respond to disease outbreaks in a timely fashion. For a detailed reference on hierarchical models, consult Henrik Madsen and Poul Thyregod's book (2011), ISBN: 9781420091557.
Runs projections of groups of matrix projection models (MPMs), allowing density dependence mechanisms to work across MPMs. This package was developed to run both adaptive dynamics simulations such as pairwise and multiple invasibility analyses, and community projections in which species are represented by MPMs. All forms of MPMs are allowed, including integral projection models (IPMs). Also includes individual-based modeling (IBM) versions of these.
This package provides tools to perform model selection alongside estimation under Linear, Logistic, Negative binomial, Quantile, and Skew-Normal regression. Under the spike-and-slab method, a probability for each possible model is estimated, along with the posterior mean, credibility interval, and standard deviation of the coefficients and parameters under the most probable model.
Sets the alpha level for coefficients in a regression model as a decreasing function of the sample size through the use of Jeffreys Approximate Bayes factor. You tell alphaN() your sample size, and it tells you to which value you must lower alpha to avoid Lindley's Paradox. For details, see Wulff and Taylor (2024) <doi:10.1177/14761270231214429>.
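A one-line illustration of the idea above; the argument name n is an assumption about alphaN()'s signature, so check the function's help page.

    library(alphaN)
    alphaN(n = 250)   # alpha level to use for a regression fitted on 250 observations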
Assists in evaluating whether and where to focus code optimization, using Amdahl's law and visual aids based on line profiling. Amdahl's profiler organizes profiling output files (including memory profiling) in a visually appealing way. It is meant to help balance development time against execution time by identifying the most promising sections of code to optimize and by projecting potential gains. The package is an addition to R's standard profiling tools and is not a wrapper for them.
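A sketch of the intended workflow, combining base R line profiling with the profiler; the package name aprof and the exact arguments of aprof() are assumptions here, and the script name is hypothetical.

    library(aprof)   # assumed package name

    Rprof("profile.out", line.profiling = TRUE)  # base R profiler with line information
    source("my_script.R")                        # hypothetical script to be profiled
    Rprof(NULL)                                  # stop profiling

    prof <- aprof("my_script.R", "profile.out")  # assumed arguments: source file, Rprof output file
    plot(prof)                                   # visualise where optimisation would pay off most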
Adjusts the output of the cranlogs package to account for CRAN-wide daily automated downloads and re-downloads caused by package updates.
Offers a set of functions to easily make predictions for univariate time series. autoTS is a wrapper of existing functions of the forecast and prophet packages, harmonising their outputs in tidy dataframes and using default values for each. The core function getBestModel() allows the user to effortlessly benchmark seven algorithms along with a bagged estimator to identify which one performs the best for a given time series.
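A hedged sketch of the benchmarking step; getBestModel() is named in the description, but the argument names (dates, values, freq) and the my.predictions() helper are assumptions about the package's interface.

    library(autoTS)

    dates  <- seq(as.Date("2015-01-01"), as.Date("2019-12-01"), by = "month")
    values <- 100 + cumsum(rnorm(length(dates)))           # toy monthly series
    best   <- getBestModel(dates, values, freq = "month")  # benchmark the algorithms and the bagged estimator
    my.predictions(best)                                   # tidy dataframe of forecasts (assumed helper)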
Queries multiple resources, HGNC (2019) <https://www.genenames.org> and limma (2015) <doi:10.1093/nar/gkv007>, to find the correspondence between the evolving nomenclature of human gene symbols, aliases, previous symbols or synonyms and the stable, curated gene entrezID from the NCBI database. This allows fast, accurate and up-to-date correspondence between human gene expression datasets from various dates and platforms (e.g., gene symbol BRCA1 - ID 672).
Addressing measurement error in covariates and misclassification in binary outcome variables within causal inference, the ATE.ERROR package implements inverse probability weighted estimation methods proposed by Shu and Yi (2017, <doi:10.1177/0962280217743777>; 2019, <doi:10.1002/sim.8073>). These methods correct errors to accurately estimate average treatment effects (ATE). The package includes two main functions: ATE.ERROR.Y() for handling misclassification in the outcome variable and ATE.ERROR.XY() for correcting both outcome misclassification and covariate measurement error. It employs logistic regression for treatment assignment and uses bootstrap sampling to calculate standard errors and confidence intervals, with simulated datasets provided for practical demonstration.
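A hedged sketch only: the two function names come from the description, but the dataset name and every argument shown are assumptions for illustration, not the package's documented interface.

    library(ATE.ERROR)

    data(simulated_data)                                  # hypothetical bundled example dataset
    fit <- ATE.ERROR.Y(data = simulated_data,             # misclassified binary outcome (hypothetical arguments)
                       outcome = "Y", treatment = "A",
                       sensitivity = 0.9, specificity = 0.9,
                       n.boot = 200)                      # bootstrap standard errors and confidence intervals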
This package creates complex autoregressive distributed lag (ARDL) models and constructs the underlying unrestricted and restricted error correction model (ECM) automatically, just by providing the order. It also performs the bounds-test for cointegration as described in Pesaran et al. (2001) <doi:10.1002/jae.616> and provides the multipliers and the cointegrating equation. The validity and the accuracy of this package have been verified by successfully replicating the results of Pesaran et al. (2001) in Natsiopoulos and Tzeremes (2022) <doi:10.1002/jae.2919>.
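A short sketch of the workflow, using the denmark example data and function names as understood from the ARDL package's replication of Pesaran et al. (2001); verify the exact calls against the package help pages.

    library(ARDL)

    data(denmark)
    model <- ardl(LRM ~ LRY + IBO + IDE, data = denmark, order = c(3, 1, 3, 2))
    bounds_f_test(model, case = 2)   # bounds test for cointegration
    multipliers(model)               # long-run multipliers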
Assists in automating the selection of terms to include in mixed models when asreml is used to fit the models. Procedures are available for choosing models that conform to the hierarchy or marginality principle, and for fitting and choosing between two-dimensional spatial models using correlation, natural cubic smoothing spline and P-spline models. A history of the fitting of a sequence of models is kept in a data frame. The package can also be used to compute functions and contrasts of, to investigate differences between, and to plot predictions obtained using any model fitting function. The content falls into the following natural groupings: (i) Data, (ii) Model modification functions, (iii) Model selection and description functions, (iv) Model diagnostics and simulation functions, (v) Prediction production and presentation functions, (vi) Response transformation functions, (vii) Object manipulation functions, and (viii) Miscellaneous functions (for further details see asremlPlus-package in help). The asreml package provides a computationally efficient algorithm for fitting a wide range of linear mixed models using Residual Maximum Likelihood. It is a commercial package and a license for it can be purchased from VSNi <https://vsni.co.uk/> as asreml-R; they will supply a zip file for local installation/updating (see <https://asreml.kb.vsni.co.uk/>). It is not needed for functions that are methods for alldiffs and data.frame objects. The asremlPlus package can also be installed from <http://chris.brien.name/rpackages/>.
This package provides a high-performance, flexible and extensible framework for developing continuous-time agent-based models. Its high performance allows it to simulate millions of agents efficiently. Agents are defined by their states (arbitrary R lists). Events are handled in chronological order; this avoids the multi-event interaction problem within a time step of discrete-time simulations and gives precise outcomes. The states are modified by provided or user-defined events. The framework provides a flexible and customizable implementation of state transitions (either spontaneous or caused by agent interactions), making it suitable for epidemiology and ecology, e.g., to model life history stages, competition and cooperation, and disease and information spread. The agent interactions are flexible and extensible. The framework provides random mixing and network interactions, and supports multi-level mixing patterns. It can easily be extended to other interactions, such as inter- and intra-household (or workplace and school) mixing, by subclassing an R6 class. It can be used to study the effect of age-specific, group-specific, and contact-specific intervention strategies, and complex interactions between individual behavior and population dynamics. This modeling concept can also be used in business, economic and political models. As a generic event-based framework, it can be applied to many other fields. More information about the implementation and examples can be found at <https://github.com/junlingm/ABM>.
Interface package for sala, the spatial network analysis library from the depthmapX software application. The R parts of the code are based on the rdepthmap package. Allows for the analysis of urban and building-scale networks and provides metrics and methods usually found within the Space Syntax domain. Methods in this package are described by K. Al-Sayed, A. Turner, B. Hillier, S. Iida and A. Penn (2014) "Space Syntax methodology", and also by A. Turner (2004) <https://discovery.ucl.ac.uk/id/eprint/2651> "Depthmap 4: a researcher's handbook".