Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
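For example, a minimal sketch of calling this endpoint with Python's requests library; the base URL below is a placeholder for wherever the service is hosted, and the headers printed are simply whatever the server returns:

    import requests

    # Placeholder base URL: substitute the host that serves this API.
    BASE_URL = "https://example.org"

    # Search for packages matching "hello", first page, 20 items per page.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
        timeout=30,
    )
    resp.raise_for_status()

    # Pagination information (such as the number of pages) is returned in
    # the response headers; print them to see what the service provides.
    for name, value in resp.headers.items():
        print(f"{name}: {value}")

    # The matching packages themselves are in the response body.
    print(resp.json())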
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions and data to accompany the 5th edition of the book "Applied Nonparametric Statistical Methods" (4th edition: Sprent & Smeeton, 2024, ISBN:158488701X); the revisions from the 4th edition include a move from describing the output of a miscellany of statistical software packages to using R. While the output from many of the functions can also be obtained using a range of other R functions, this package provides functions in a unified setting and gives output using both p-values and confidence intervals, exemplifying the book's approach of treating p-values as a guide to statistical importance and not an end product in their own right. Please note that in creating the ANSM5 package we do not claim to have produced software that is necessarily the most computationally efficient or the most comprehensive.
An R wrapper for agena.ai <https://www.agena.ai> that gives users the ability to work with agena.ai from the R environment. Users can create Bayesian network models from scratch or import existing models into R, and export them to the agena.ai cloud or local API for calculations. Note: running calculations requires a valid agena.ai API license (past the initial trial period of the local API).
Filters animal satellite tracking data obtained from the Argos system (<https://www.argos-system.org/>), following the algorithm described in Freitas et al. (2008) <doi:10.1111/j.1748-7692.2007.00180.x>. It is especially indicated for telemetry studies of marine animals, where Argos locations are predominantly of low quality.
Routines for astrochronologic testing, astronomical time scale construction, and time series analysis <doi:10.1016/j.earscirev.2018.11.015>. Also included are a range of statistical analysis and modeling routines that are relevant to time scale development and paleoclimate analysis.
This package provides tools to read/write/publish metadata based on the Atom XML syndication format. This includes support for the Dublin Core XML implementation, and a client for APIs implementing the AtomPub SWORD API specification.
Functionality to allow users to easily colour plots with the colour palettes of various academic institutions.
This package implements Bayesian estimation and inference for alpha-mixture survival models, including Weibull- and Exponential-based components, with tools for simulation and posterior summaries. The methods target applications in reliability and biomedical survival analysis. The package implements Bayesian estimation for the alpha-mixture methodology introduced in Asadi et al. (2019) <doi:10.1017/jpr.2019.72>.
Wraps the Abseil C++ library for use by R packages. Original files are from <https://github.com/abseil/abseil-cpp>. Patches are located at <https://github.com/doccstat/abseil-r/tree/main/local/patches>.
This package provides a tool that "multiply imputes" missing data in a single cross-section (such as a survey), from a time series (like variables collected for each year in a country), or from a time-series-cross-sectional data set (such as collected by years for each of several countries). Amelia II implements our bootstrapping-based algorithm that gives essentially the same answers as the standard IP or EMis approaches, is usually considerably faster than existing approaches, and can handle many more variables. Unlike Amelia I and other statistically rigorous imputation software, it virtually never crashes (but please let us know if you find otherwise!). The program also generalizes existing approaches by allowing for trends in time series across observations within a cross-sectional unit, as well as priors that allow experts to incorporate beliefs they have about the values of missing cells in their data. Amelia II also includes useful diagnostics of the fit of multiple imputation models. The program works from the R command line or via a graphical user interface that does not require users to know R.
Statistical procedures to perform stability analysis in plant breeding and to identify stable genotypes under diverse environments. It is possible to calculate the coefficient of homeostaticity by Khangildin et al. (1979), variance of specific adaptive ability by Kilchevsky & Khotyleva (1989), weighted homeostaticity index by Martynov (1990), steadiness of stability index by Udachin (1990), superiority measure by Lin & Binns (1988) <doi:10.4141/cjps88-018>, regression on environmental index by Eberhart & Russell (1966) <doi:10.2135/cropsci1966.0011183X000600010011x>, Tai's (1971) stability parameters <doi:10.2135/cropsci1971.0011183X001100020006x>, stability variance by Shukla (1972) <doi:10.1038/hdy.1972.87>, ecovalence by Wricke (1962), nonparametric stability parameters by Nassar & Huehn (1987) <doi:10.2307/2531947>, and Francis & Kannenberg's parameters of stability (1978) <doi:10.4141/cjps78-157>.
This package provides capabilities to process Apache HTTPD log files. Its main functionality is to extract data from access and error log files into data frames.
Data from the anxiety and confinement study from Alvarado-Aravena et al. (2022) <doi:10.3390/bs12100398>.
This dataset contains daily air quality measurements in Spain over a period of 18 years (from 2001 to 2018). The measurements refer to several pollutants. These data are openly published by the Government of Spain. The data were originally spread over a number of files and formats; here, the same information is contained in a single data frame for the convenience of researchers, journalists, or the general public. See the Spanish Government website <http://www.miteco.gob.es/> for more information.
Loss reserving generally focuses on identifying a single model that can generate superior predictive performance. However, different loss reserving models specialise in capturing different aspects of loss data. This is recognised in practice in the sense that results from different models are often considered, and sometimes combined. For instance, actuaries may take a weighted average of the prediction outcomes from various loss reserving models, often based on subjective assessments. This package allows for the use of a systematic framework to objectively combine (i.e. ensemble) multiple stochastic loss reserving models such that the strengths offered by different models can be utilised effectively. Our framework is developed in Avanzi et al. (2023). Firstly, our criterion for model combination considers the full distributional properties of the ensemble and not just the central estimate, which is of particular importance in the reserving context. Secondly, our framework is tailored to the features inherent to reserving data. These include, for instance, accident, development, calendar, and claim maturity effects. Crucially, the relative importance and scarcity of data across accident periods renders the problem distinct from the traditional ensemble techniques in statistical learning. Our framework is illustrated with a complex synthetic dataset. In the results, the optimised ensemble outperforms both (i) traditional model selection strategies, and (ii) an equally weighted ensemble. In particular, the improvement occurs not only with central estimates but also relevant quantiles, such as the 75th percentile of reserves (typically of interest to both insurers and regulators). Reference: Avanzi B, Li Y, Wong B, Xian A (2023) "Ensemble distributional forecasting for insurance loss reserving" <doi:10.48550/arXiv.2206.08541>.
Flexible parametric Accelerated Hazards (AH) regression models in overall and relative survival frameworks with 13 distinct Baseline Distributions. The AH Model can also be applied to lifetime data with crossed survival curves. Any user-defined parametric distribution can be fitted, given at least an R function defining the cumulative hazard and hazard rate functions. See Chen and Wang (2000) <doi:10.1080/01621459.2000.10474236>, and Lee (2015) <doi:10.1007/s10985-015-9349-5> for more details.
This package implements wavelet-based approaches for describing population admixture. Principal Components Analysis (PCA) is used to define the population structure and produce a localized admixture signal for each individual. Wavelet summaries of the PCA output describe variation present in the data and can be related to population-level demographic processes. For more details, see J Sanderson, H Sudoyo, TM Karafet, MF Hammer and MP Cox. 2015. Reconstructing past admixture processes from local genomic ancestry using wavelet transformation. Genetics 200:469-481 <doi:10.1534/genetics.115.176842>.
Describes a time series first, and then performs time series analysis using one hybrid model and two specially structured machine learning models (Artificial Neural Network, ANN, and Support Vector Regression, SVR). More information can be obtained from Paul and Garai (2022) <doi:10.1007/s41096-022-00128-3>.
This package performs AnchorRegression as proposed by Rothenhäusler et al. (2020). The code is adapted from the original paper's repository (<https://github.com/rothenhaeusler/anchor-regression>) but was developed independently of the paper's authors.
Fits a model to adjust for and capture additional variation across three dimensions (age group, time, and space) in the residuals left over from a prediction model such as a linear regression, a mixed model, and so on. Details are given in Foreman et al. (2015) <doi:10.1186/1478-7954-10-1>.
Using this package, you can fit a random effects model using either the hierarchical credibility model, a combination of the hierarchical credibility model with a generalized linear model, or a Tweedie generalized linear mixed model. See Campo, B.D.C. and Antonio, K. (2023) <doi:10.1080/03461238.2022.2161413>.
Extraction, preparation, visualisation and analysis of TERN AusPlots ecosystem monitoring data. Direct access to plot-based data on vegetation and soils across Australia, including physical sample barcode numbers. Simple function calls extract the data and merge them into species occurrence matrices for downstream analysis, or calculate things like basal area and fractional cover. TERN AusPlots is a national field plot-based ecosystem surveillance monitoring method and dataset for Australia. The data have been collected across a national network of plots and transects by the Terrestrial Ecosystem Research Network (TERN - <https://www.tern.org.au>), an Australian Government NCRIS-enabled project, and its Ecosystem Surveillance platform (<https://www.tern.org.au/tern-land-observatory/ecosystem-surveillance-and-environmental-monitoring/>).
This package implements a constrained version of hierarchical agglomerative clustering, in which each observation is associated with a position, and only adjacent clusters can be merged. Typical application fields in bioinformatics include Genome-Wide Association Studies or Hi-C data analysis, where the similarity between items is a decreasing function of their genomic distance. Taking advantage of this feature, the implemented algorithm is time and memory efficient. This algorithm is described in Ambroise et al. (2019) <doi:10.1186/s13015-019-0157-4>.
This package provides a unified and straightforward interface for performing a variety of meta-analysis methods directly from user data. Users can input a data frame, specify key parameters, and effortlessly execute and compare multiple common meta-analytic models. Designed for immediate usability, the package facilitates transparent, reproducible research without manual implementation of each analytical method. Ideal for researchers aiming for efficiency and reproducibility, it streamlines workflows from data preparation to results interpretation.
Anytime-valid sequential estimation of the p-value of a test calibrated by Monte-Carlo simulation, as described in Stoepker & Castro (2024) <doi:10.48550/arXiv.2409.18908>.