Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
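As a minimal sketch, the request above could be made from R with the httr and jsonlite packages; the base URL below is only a placeholder for this site's address.
library(httr)
library(jsonlite)
base_url <- "https://example.org"   # placeholder: replace with this site's base URL
resp <- GET(paste0(base_url, "/api/packages"),
            query = list(search = "hello", page = 1, limit = 20))
print(headers(resp))                # pagination information is in the response headers
packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))  # matching packages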
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for covariance matrix comparisons and estimation of repeatabilities in measurements and matrices, as well as general evolutionary quantitative genetics tools. Melo D, Garcia G, Hubbe A, Assis AP, Marroig G (2016) <doi:10.12688/f1000research.7082.3>.
Set of functions to keep track of and find objects in user-defined environments by identifying environments by name, which cannot be retrieved with the built-in function environmentName(). The package also provides functionality to obtain simplified information about function calling chains and to get an object's memory address.
This package provides functions to compute state-specific and marginal life expectancies. The computation is based on a fitted continuous-time multi-state model that includes an absorbing death state; see Van den Hout (2017, ISBN:9781466568402). The multi-state model should be fitted with the msm package, using age as the time scale.
Connect to Elasticsearch and OpenSearch, NoSQL databases built on the Java Virtual Machine and the Apache Lucene library. Interacts with the Elasticsearch HTTP API (<https://www.elastic.co/elasticsearch/>) and the OpenSearch HTTP API (<https://opensearch.org/>). Includes functions for setting connection details to Elasticsearch and OpenSearch instances, loading bulk data, and searching for documents with both HTTP query variables and JSON-based body requests. In addition, elastic provides functions for interacting with the APIs for indices, documents, nodes and clusters, an interface to the cat API, and more.
Routines for combining causal effect estimates and study diagnostics across multiple data sites in a distributed study, without sharing patient-level data. Allows for normal and non-normal approximations of the data-site likelihood of the effect parameter.
This R package provides extreme value index estimators for heavy-tailed models: the mean-of-order-p estimator <DOI:10.1016/j.csda.2012.07.019>, the peaks-over-random-threshold methodology <DOI:10.57805/revstat.v4i3.37> and a bias-reduced estimator <DOI:10.1080/00949655.2010.547196>. The package also computes moment, generalised Hill <DOI:10.2307/3318416> and mixed moment estimates of the extreme value index. Estimators of high quantiles and value at risk based on these estimators are also implemented.
Generate citations and references for R packages from CRAN or Bioconductor. Supports RIS and BibTeX formats with automatic DOI retrieval from GitHub repositories and published papers. Includes a command-line interface for batch processing.
Estimate a total causal effect from observational data under linearity and causal sufficiency. The observational data are assumed to be generated from a linear structural equation model (SEM) with independent and additive noise. The causal DAG underlying the SEM is required to be known up to a maximally oriented partially directed acyclic graph (MPDAG), a general class of graphs consisting of both directed and undirected edges that includes CPDAGs (i.e., essential graphs) and DAGs. Such graphs are usually obtained with structure learning algorithms plus added background knowledge. The program can estimate every identified effect, including those with single and multiple treatment variables. Moreover, the resulting estimate has the minimal asymptotic covariance (and hence the shortest confidence intervals) among all estimators based on the sample covariance.
Endpoint selection and sample size reassessment for multiple binary endpoints based on blinded and/or unblinded data. Trial design that allows an adaptive modification of the primary endpoint based on blinded information obtained at an interim analysis. The decision rule chooses the endpoint with the lower estimated required sample size. Additionally, the sample size is reassessed using the estimated event probabilities and correlation between endpoints. The implemented design is proposed in Bofill Roig, M., Gómez Melis, G., Posch, M., and Koenig, F. (2022). <doi:10.48550/arXiv.2206.09639>.
This is a data package containing a database of epidemiological parameters. It stores the data for the epiparameter R package. The epidemiological parameter estimates are extracted from the literature.
This package provides empirical likelihood-based methods for the inference of variance components in linear mixed-effects models.
This package provides functions for the Bayesian analysis of extreme value models, using Markov chain Monte Carlo methods. Allows the construction of both uninformative and informed prior distributions for common statistical models applied to extreme event data, including the generalized extreme value distribution.
Please note: active development has moved to the packages validate and errorlocate. Facilitates reading and manipulating (multivariate) data restrictions (edit rules) on numerical and categorical data. Rules can be defined with common R syntax and parsed to an internal (matrix-like) format. Rules can be manipulated with variable elimination and value substitution methods, allowing for feasibility checks and more. Data can be tested against the rules, and erroneous fields can be found based on Fellegi and Holt's generalized principle. Rule dependencies can be visualized using the igraph package.
This package provides functions for assigning Clarke or Parkes (Consensus) error grid zones to blood glucose values, and for plotting both types of error grids in both mg/dL and mmol/L units.
In agricultural, post-harvest, processing, engineering and industrial experiments, factors are often differentiated by the ease with which they can be changed from one experimental run to the next: one or more factors may be expensive or time-consuming to change, i.e. hard-to-change factors. Such factors restrict the use of complete randomization, as it may make the experiment expensive and time-consuming. Split-plot designs can be used in these situations. In general, model estimation for split-plot designs requires generalized least squares (GLS). However, for some split-plot designs, ordinary least squares (OLS) estimates are equivalent to GLS estimates; these designs are known in the literature as equivalent-estimation split-plot designs. For method details, see Macharia, H. and Goos, P. (2010) <doi:10.1080/00224065.2010.11917833>. Balanced split-plot designs have an equal number of subplots within every whole plot. This package constructs equivalent-estimation balanced split-plot designs for different experimental set-ups, along with different statistical criteria to measure the performance of these designs. It consists of the function equivalent_BSPD().
Instead of counting observations before and after a subset() call, the ExclusionTable() function reports the number of observations before and after each subset() call, together with the number of observations that were excluded. This is especially useful in observational studies for keeping track of how many observations were excluded by each inclusion or exclusion criterion. You just need to provide ExclusionTable() with a dataset and a list of logical filter statements, as in the sketch below.
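A minimal sketch of that workflow, assuming an exclusion_table()-style interface taking a data frame plus filter statements given as character strings; the exact function and argument names are an assumption and should be checked against the package documentation.
library(ExclusionTable)
tab <- exclusion_table(
  data = mtcars,                          # any data frame
  exclusion_criteria = c("mpg < 15",      # first exclusion step
                         "cyl == 8"))     # second exclusion step
tab   # reports counts before/after each step and the number excluded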
An implementation of the European Forestry Dynamics Model (EFDM) and an estimation algorithm for the transition probabilities. The EFDM is a large-scale forest model that simulates the development of the forest and estimates the volume of wood harvested for any given forested area. This estimate can be broken down by, for example, species, site quality, management regime and ownership category. See Packalen et al. (2015) <doi:10.2788/153990>.
Calculate and analyze household energy burden using the Net Energy Return aggregation methodology. Functions support weighted statistical calculations across geographic and demographic cohorts, with utilities for formatting results into publication-ready tables. Methods are based on Scheier & Kittner (2022) <doi:10.1038/s41467-021-27673-y>.
This package creates simple or stacked epidemic curves for hourly, daily, weekly or monthly outcome data.
This package provides functions for easy building of error correction models (ECM) for time series regression.
Four ensemble-based methods (SMOTEBoost, RUSBoost, UnderBagging, and SMOTEBagging) for the class imbalance problem are implemented for binary classification. These methods combine ensemble learning with data re-sampling techniques to improve model performance in the presence of class imbalance. A special feature is the possibility of choosing among multiple supervised learning algorithms to build the weak learners within the ensemble models. References: Nitesh V. Chawla, Aleksandar Lazarevic, Lawrence O. Hall, and Kevin W. Bowyer (2003) <doi:10.1007/978-3-540-39804-2_12>, Chris Seiffert, Taghi M. Khoshgoftaar, Jason Van Hulse, and Amri Napolitano (2010) <doi:10.1109/TSMCA.2009.2029559>, R. Barandela, J. S. Sanchez, R. M. Valdovinos (2003) <doi:10.1007/s10044-003-0192-z>, Shuo Wang and Xin Yao (2009) <doi:10.1109/CIDM.2009.4938667>, Yoav Freund and Robert E. Schapire (1997) <doi:10.1006/jcss.1997.1504>.
Implementation of Energy Trees, a statistical model to perform classification and regression with structured and mixed-type data. The model has a similar structure to Conditional Trees, but brings in Energy Statistics to test independence between variables that are possibly structured and of different nature. Currently, the package covers functions and graphs as structured covariates. It builds upon partykit to provide functionalities for fitting, printing, plotting, and predicting with Energy Trees. Energy Trees are described in Giubilei et al. (2022) <arXiv:2207.04430>.
Comprehensive toolkit for addressing selection bias in binary disease models across diverse non-probability samples, each with unique selection mechanisms. It utilizes Inverse Probability Weighting (IPW) and Augmented Inverse Probability Weighting (AIPW) methods to reduce selection bias effectively in multiple non-probability cohorts by integrating data from either individual-level or summary-level external sources. The package also provides a variety of variance estimation techniques. Please refer to Kundu et al. <doi:10.48550/arXiv.2412.00228>.
An approach and software for modelling marine and freshwater ecosystems. It is articulated entirely around trophic levels. EcoTroph's key displays are bivariate plots, with trophic levels as the abscissa and biomass flows or related quantities as ordinates. Thus, trophic ecosystem functioning can be modelled as a continuous flow of biomass surging up the food web, from lower to higher trophic levels, due to predation and ontogenic processes. Such an approach, wherein species as such disappear, may be viewed as the ultimate stage in the use of the trophic level metric for ecosystem modelling, providing a simplified but potentially useful caricature of ecosystem functioning and the impacts of fishing. This version contains the catch trophic spectrum analysis (CTSA) function and corrected versions of the mf.diagnosis and create.ETmain functions.