Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
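For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL https://example.org below is a placeholder standing in for this site's address):

library(httr)

# Search for packages matching "hello"; pagination metadata is in the headers.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))
content(resp)   # the matching packages
headers(resp)   # pagination information (number of pages, etc.)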
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Streamlines the process of updating changelogs (NEWS.md) and versioning R packages developed in git repositories.
Allows the user to estimate dynamic model averaging, dynamic model selection, and the median probability model. The original methods are implemented, as well as selected further modifications of these methods. In particular, the user can choose between recursive moment estimation and an exponentially weighted moving average for variance updating. Inclusion probabilities can be modified using Google Trends data. The code is written in a way that minimises the computational burden (which is quite an obstacle for dynamic model averaging if many variables are used); for example, this package allows for parallel computations and the Occam's window approach. The package is designed to be especially useful in economics and finance. Main reference: Raftery, A.E., Karny, M., Ettler, P. (2010) <doi:10.1198/TECH.2009.08104>.
Download flight and airport data from Brazil's Civil Aviation Agency (ANAC) <https://www.gov.br/anac/pt-br>. The data covers detailed information on aircraft, airports, and airport operations registered with ANAC. It also includes data on airfares, all international flights to and from Brazil, and domestic flights within the country.
This package provides a financial calculator offering very fast implementations of common financial indicators using Rust code. It includes functions for bond-related indicators, such as yield to maturity ('YTM'), modified duration, and Macaulay duration, as well as functions for calculating time-weighted and money-weighted rates of return (using the Modified Dietz method) for multiple portfolios, given their market values and profit and loss ('PnL') data. fcl is designed to be efficient and accurate for financial analysis and computation. The methods used in this package are based on the following references: <https://en.wikipedia.org/wiki/Modified_Dietz_method>, <https://en.wikipedia.org/wiki/Time-weighted_return>.
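As background, a minimal sketch of the Modified Dietz computation in plain R, following the formula from the Wikipedia page cited above (this illustrates the method itself, not fcl's API):

# Money-weighted return with external cash flows weighted by time invested.
modified_dietz <- function(bmv, emv, cf, t, period) {
  # bmv, emv: beginning and ending market values
  # cf: external cash flows; t: times of the flows; period: total length
  w <- (period - t) / period  # fraction of the period each flow was invested
  (emv - bmv - sum(cf)) / (bmv + sum(w * cf))
}
modified_dietz(bmv = 100, emv = 115, cf = 10, t = 15, period = 30)  # ~0.0476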
This package provides utilities to facilitate handling of Fude Polygon data downloadable from the Ministry of Agriculture, Forestry and Fisheries website <https://open.fude.maff.go.jp>.
The FAS package implements a bootstrap method for tuning parameter selection and tuning-free inference on sparse regression coefficient vectors. Currently, the test can be applied to linear and factor-augmented sparse regressions; see Lederer & Vogt (2021, JMLR) <https://www.jmlr.org/papers/volume22/20-539/20-539.pdf> and Beyhum & Striaukas (2023) <arXiv:2307.13364>.
This package implements the h-likelihood estimation procedures for general frailty models including competing-risk models and joint models.
An R API client package for Fingrid Open Data <https://data.fingrid.fi/> on the electricity market and the power system. The get_data() function holds the main application logic to retrieve time-series data. API calls require free user account registration. Data is made available by Fingrid Oyj and distributed under the Creative Commons Attribution 4.0 license <https://creativecommons.org/licenses/by/4.0/>.
A package for parametric relative survival analyses. It allows modeling of both non-proportional and non-linear effects using splines (B-spline and truncated power bases) and a Weighted Cumulative Index of Exposure effect, with a correction model for the life table. Both non-proportional and non-linear effects are described in Remontet, L. et al. (2007) <doi:10.1002/sim.2656> and Mahboubi, A. et al. (2011) <doi:10.1002/sim.4208>.
Estimates the probability matrix for the R×C Ecological Inference problem using the Expectation-Maximization Algorithm with four approximation methods for the E-Step, as well as an exact method. It also provides a bootstrap function to estimate the standard deviation of the estimated probabilities. In addition, it has functions that aggregate rows optimally to obtain more reliable estimates in cases with few data points. For comparing the probability estimates of two groups, a Wald test routine is implemented. The library has data from the first round of the Chilean Presidential Election 2021 and can also generate synthetic election data. Methods described in Thraves, Charles; Ubilla, Pablo; Hermosilla, Daniel (2024) A Fast Ecological Inference Algorithm for the R×C case <doi:10.2139/ssrn.4832834>.
Implementation of two sample comparison procedures based on median-based statistical tests for functional data, introduced in Smida et al. (2022) <doi:10.1080/10485252.2022.2064997>. Other competitive state-of-the-art approaches proposed by Chakraborty and Chaudhuri (2015) <doi:10.1093/biomet/asu072>, Horvath et al. (2013) <doi:10.1111/j.1467-9868.2012.01032.x> or Cuevas et al. (2004) <doi:10.1016/j.csda.2003.10.021> are also included in the package, as well as procedures to run test result comparisons and power analysis using simulations.
Forest data quality is a package containing nine methods of analysis for forest databases containing inventory data and growth models. The analyses focus on the quality of the data present in the database, with emphasis on its consistency, punctuality, and completeness.
This package provides a suite of bootstrap-based models and tools for analyzing fish stocks and aquatic populations. Designed for ecologists and fisheries scientists, it supports data from length-frequency distributions, tag-and-recapture studies, and hard structure readings (e.g., otoliths). See Schwamborn et al., 2019 <doi:10.1016/j.ecolmodel.2018.12.001> for background. The package includes functions for bootstrapped fitting of growth curves and plotting.
Perform variable selection in settings with possibly missing data based on extrinsic (algorithm-specific) and intrinsic (population-level) variable importance. Uses a Super Learner ensemble to estimate the underlying prediction functions that give rise to estimates of variable importance. For more information about the methods, please see Williamson and Huang (2023+) <arXiv:2202.12989>.
Use the Rmarkdown First method to build your package. Start your package with documentation, functions, examples, and tests in a single file. Everything can be set from the Rmarkdown template file provided in your project, then inflated as a package. Inflating the template copies the relevant chunks and sections into the appropriate files required for package development.
This package performs analysis of variance testing procedures for univariate and multivariate functional data (Cuesta-Albertos and Febrero-Bande (2010) <doi:10.1007/s11749-010-0185-3>, Gorecki and Smaga (2015) <doi:10.1007/s00180-015-0555-0>, Gorecki and Smaga (2017) <doi:10.1080/02664763.2016.1247791>, Zhang et al. (2018) <doi:10.1016/j.csda.2018.05.004>).
Runs three-dimensional functional principal component analysis and returns the three-dimensional functional principal component scores. The details of the method are explained in Lin et al. (2015) <doi:10.1371/journal.pone.0132945>.
The Clutter model is a significant forest growth simulation tool. Grounded in individual trees and comprehensively considering factors such as competition among trees and the impact of environmental elements on growth, it can accurately reflect the growth process of forest stands. It can be applied in areas like forest resource management, harvesting planning, and ecological research. With the help of the Clutter model, people can better understand the dynamic changes of forests and provide a scientific basis for rational forest management and protection of the ecological environment. This R package constructs forest growth and harvest models based on the Clutter model and supports optimized forest management. References: Farias A, Soares C, Leite H, et al. (2021) <doi:10.1007/s10342-021-01380-1>; Guera O, Silva J, Ferreira R, et al. (2019) <doi:10.1590/2179-8087.038117>.
Randomized clinical trials commonly follow participants for a time-to-event efficacy endpoint for a fixed period of time. Consequently, at the time when the last enrolled participant completes their follow-up, the number of observed endpoints is a random variable. Given data collected through an interim timepoint, simulation-based estimation and inferential procedures in the standard right-censored failure time analysis framework are conducted for the distribution of the number of endpoints (in total as well as by treatment arm) at the end of the follow-up period. The future (i.e., yet unobserved) enrollment, endpoint, and dropout times are generated according to mechanisms specified in the simTrial() function in the seqDesign package. A Bayesian model for the endpoint rate, offering the option to specify a robust mixture prior distribution, is used for generating future data (see the vignette for details). Inference can be restricted to participants who received treatment according to the protocol and are observed to be at risk for the endpoint at a specified timepoint. Plotting functions are provided for graphical display of results.
Construction and smart selection of Gaussian process models for analysis of computer experiments with emphasis on treatment of functional inputs that are regularly sampled. This package offers: (i) flexible modeling of functional-input regression problems through the fairly general Gaussian process model; (ii) built-in dimension reduction for functional inputs; (iii) heuristic optimization of the structural parameters of the model (e.g., active inputs, kernel function, type of distance). An in-depth tutorial on the use of funGp is provided in Betancourt et al. (2024) <doi:10.18637/jss.v109.i05>, and metamodeling background is provided in Betancourt et al. (2020) <doi:10.1016/j.ress.2020.106870>. The algorithm for structural parameter optimization is described in <https://hal.science/hal-02532713>.
Log-ratio Lasso regression for continuous, binary, and survival outcomes with (longitudinal) compositional features. See Fei and others (2024) <doi:10.1016/j.crmeth.2024.100899>.
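As background, a minimal sketch in plain R of the additive log-ratio transform, one common way of turning compositional features into log-ratios (an illustration of the general idea, not this package's documented API):

# Additive log-ratio: log of each part relative to the last (reference) part.
alr <- function(p) log(p[-length(p)] / p[length(p)])
alr(c(0.2, 0.3, 0.5))  # log(0.2/0.5), log(0.3/0.5)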
This package provides three methods to generate fully-sequential space-filling designs inside a unit hypercube. A fully-sequential space-filling design means a sequence of nested designs (as the design size varies from one point up to some maximum number of points) with the design points added one at a time and such that the design at each size has good space-filling properties. Two methods target the minimum pairwise distance criterion and generate maximin designs, one of which is more efficient when the design size is large. One method targets the maximum hole size criterion and uses a heuristic to generate what is closer to a minimax design.
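As an illustration, the minimum pairwise distance criterion targeted by the maximin methods can be evaluated in a few lines of plain R (not this package's API):

set.seed(1)
X <- matrix(runif(20), ncol = 2)  # a random 10-point design in [0, 1]^2
min(dist(X))  # the value a maximin design maximizes; larger is better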
Efficient implementations of the algorithms in the Almost-Matching-Exactly framework for interpretable matching in causal inference. These algorithms match units via a learned, weighted Hamming distance that determines which covariates are more important to match on. For more information and examples, see the Almost-Matching-Exactly website.
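As an illustration, a minimal sketch in plain R of a weighted Hamming distance of the kind this framework learns (the weights and values here are made up; this is not the package's API):

# Covariates with larger weights matter more when two units disagree on them.
weighted_hamming <- function(x, y, w) sum(w * (x != y))
weighted_hamming(x = c(1, 0, 2), y = c(1, 1, 3), w = c(5, 1, 0.5))  # 1.5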
Over sixty clustering algorithms are provided in this package with consistent input and output, which enables the user to try out algorithms swiftly. Additionally, 26 statistical approaches for the estimation of the number of clusters, as well as the mirrored density plot (MD-plot) of clusterability, are implemented. The package is published in Thrun, M.C., Stier, Q.: "Fundamental Clustering Algorithms Suite" (2021), SoftwareX, <DOI:10.1016/j.softx.2020.100642>. Moreover, the fundamental clustering problems suite (FCPS) offers a variety of clustering challenges any algorithm should handle when facing real world data; see Thrun, M.C., Ultsch, A.: "Clustering Benchmark Datasets Exploiting the Fundamental Clustering Problems" (2020), Data in Brief, <DOI:10.1016/j.dib.2020.105501>.