Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the number of pages) is returned in the response headers.
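For example, you could call the API from R with httr (a minimal sketch; "https://example.org" below is a placeholder for whatever host serves this page, and the exact pagination header names are not specified here, so the sketch just prints all response headers):

    library(httr)

    # Query the search API with the parameters described above.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)
    matches <- content(resp)  # parsed body: the matching packages
    headers(resp)             # pagination info (e.g. number of pages) is here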
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Fit models of modularity to morphological landmarks and perform model selection on the results. Models can be fitted with a single within-module correlation or with separate within-module correlations for each module.
Digital simulation of electrochemical processes. Each function allows for implicit and explicit solution of the differential equation using methods such as Euler, backwards implicit, Runge-Kutta 4, Crank-Nicolson, and the backward differentiation formula, as well as different numbers of points for the derivative approximation. Several electrochemical processes can be simulated, such as: chronoamperometry, potential step, linear sweep, cyclic voltammetry (CV), cyclic voltammetry with an electrochemical reaction followed by a chemical reaction (EC mechanism), and CV with two consecutive electrochemical reactions (EE mechanism). Update 1.1.0 added a general-purpose CV function that allows simulating up to 4 EE mechanisms combined with a chemical reaction for each species. Update 1.2.0 improved the accuracy of the measurements and allows user-specified data resolution for simulations. A bibliography for these methods can be found in the following texts: Dieter Britz, Jorg Strutwolf (2016) <ISBN:978-3-319-30292-8>; Allen J. Bard, Larry R. Faulkner (2000) <ISBN:978-0-471-04372-0>.
Functions and data sets from the book "R ile Temel Ekonometri" ("Basic Econometrics with R") by S. Guris, E. C. Akay, and B. Guris (2020), published in Turkish. The functions in this package perform the Durbin two-stage method for autocorrelation, the generalized differencing method for correcting autocorrelation, and the Hausman test for identification, and compute LM, LR, and Wald test statistics for redundant variables.
This package provides a toolbox for analyzing plant disease epidemics. It provides a common framework for plant disease intensity data recorded over time and/or space. The implemented statistical methods currently focus mainly on spatial pattern analysis (e.g., aggregation indices, Taylor and binary power laws, distribution fitting, SADIE and mapcomp methods). See Laurence V. Madden, Gareth Hughes, Franck van den Bosch (2007) <doi:10.1094/9780890545058> for further information on these methods. Several data sets, mainly published in the plant disease epidemiology literature, are also included in this package.
There is no ophthalmic researcher who has not had headaches from handling visual acuity entries: different notations, untidy entries. This shall now be a matter of the past. Eye makes it as easy as pie to work with VA data - easy cleaning and easy conversion between Snellen, logMAR, ETDRS letters, and qualitative visual acuity shall never pester you again. The eye package automates the pesky task of counting patients and eyes, and can help clean data with easy re-coding for right and left eyes. It also contains functions to help reshape eye-side-specific variables between wide and long format. Visual acuity conversion is based on Schulze-Bonsel et al. (2006) <doi:10.1167/iovs.05-0981>, Gregori et al. (2010) <doi:10.1097/iae.0b013e3181d87e04>, Beck et al. (2003) <doi:10.1016/s0002-9394(02)01825-1> and Bach (2007) <https://michaelbach.de/sci/acuity.html>.
Replication methods to compute some basic statistical operations (means, standard deviations, frequency tables, percentiles, mean comparisons using weighted effect coding, generalized linear models, and linear multilevel models) in complex survey designs comprising multiply imputed or nested imputed variables and/or a clustered sampling structure, both of which require special procedures, at least for estimating standard errors. See the package documentation for a more detailed description along with references.
Facilitates univariate and multivariate analysis of evolutionary sequences of phenotypic change. The package extends the modeling framework available in the paleoTS package. Please see <https://klvoje.github.io/evoTS/index.html> for information about the package and the implemented models.
The Economic Policy Institute (<https://www.epi.org/>) provides researchers, media, and the public with easily accessible, up-to-date, and comprehensive historical data on the American labor force. It is compiled from Economic Policy Institute analysis of government data sources. Use it to research wages, inequality, and other economic indicators over time and among demographic groups. Data is usually updated monthly.
Compute common data quality metrics for accuracy, precision, and data loss for screen-based eye trackers. Supports input data both in pixels on the screen and in degrees; output measures are (where appropriate) expressed as angles in degrees.
Multivariate modeling of data after deflation of interfering effects. See EF Mosleth et al. (2021) <doi:10.1038/s41598-021-82388-w> and EF Mosleth et al. (2020) <doi:10.1016/B978-0-12-409547-2.14882-6>.
Detects sustained change in digital biomarker data using simultaneous confidence bands. Accounts for noise using an autoregressive model. Based on Buehlmann (1998), "Sieve bootstrap for smoothing in nonstationary time series" <doi:10.1214/aos/1030563978>.
Data sets from the book "Forecasting with exponential smoothing: the state space approach" by Hyndman, Koehler, Ord and Snyder (Springer, 2008).
Reads European Data Format files (EDF and EDF+; see <http://www.edfplus.info>), BioSemi Data Format files (BDF; see <http://www.biosemi.com/faq/file_format.htm>), and BDF+ files (see <http://www.teuniz.net/edfbrowser/bdfplus%20format%20description.html>). The files are read in two steps: first the header is read, and then the signals (using the header object as a parameter).
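The two-step workflow looks like this in practice (a minimal sketch using edfReader's readEdfHeader() and readEdfSignals(); "recording.edf" is a placeholder file name):

    library(edfReader)

    # Step 1: read the header of the recording.
    hdr <- readEdfHeader("recording.edf")

    # Step 2: read the signals, passing the header object as a parameter.
    signals <- readEdfSignals(hdr)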
Miscellaneous functions for data cleaning and data analysis of educational assessments. Includes functions for descriptive analyses, character vector manipulations and weighted statistics. Mainly a lightweight dependency for the packages eatRep, eatGADS, eatPrep and eatModel (which will subsequently be submitted to CRAN). The function for defining (weighted) contrasts in weighted effect coding refers to te Grotenhuis et al. (2017) <doi:10.1007/s00038-016-0901-1>. Functions for weighted statistics refer to Wolter (2007) <doi:10.1007/978-0-387-35099-8>.
Provides an optimal histogram, in the sense of probability density estimation and feature detection, by means of multiscale variational inference. In other words, the resulting histogram serves as an optimal density estimator, and meanwhile recovers features, such as increases or modes, with control of both false positives and false negatives. Moreover, it provides a parsimonious representation in terms of the number of blocks, which simplifies data interpretation. The only assumption of the method is that the data points are independent and identically distributed, so it applies to fairly general situations, including continuous distributions, discrete distributions, and mixtures of both. For details see Li, Munk, Sieling and Walther (2016) <arXiv:1612.07216>.
Download data from the European Social Survey directly from its website <http://www.europeansocialsurvey.org/>. There are two families of functions that allow you to download the data and interactively check which countries and rounds are available.
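A minimal sketch of this workflow, assuming the show_*/import_* function families and the registered-email requirement from the package documentation (the email and round number below are placeholders):

    library(essurvey)

    # Authenticate with the email registered on the ESS website.
    set_email("your@email.com")

    show_countries()         # interactively check available countries
    show_rounds()            # ...and available rounds
    round_one <- import_rounds(1)  # download round 1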
This package provides methods and utilities for causal emergence. It is used to explore and compute various information-theoretic metrics for networks, such as effective information, effectiveness, and causal emergence.
Estimation of unknown historical or archaeological dates subject to relationships with other relative dates and absolute constraints, derived as marginal densities from the full joint conditional, using a two-stage Gibbs sampler with consistent batch means to assess convergence. Features reporting on Monte Carlo standard errors, as well as tools for rule-based estimation of dates of production and use of artifact types, aligning and checking relative sequences, and evaluating the impact of the omission of relative/absolute events upon one another.
Exploratory principal component analysis for large-scale datasets, including sparse principal component analysis and sparse matrix approximation.
This package provides a tool for the preparation and enrichment of health datasets for analysis (Toner et al. (2023) <doi:10.1093/gigascience/giad030>). Provides functionality for assessing data quality and for improving the reliability and machine interpretability of a dataset. eHDPrep also enables semantic enrichment of a dataset where metavariables are discovered from the relationships between input variables determined from user-provided ontologies.
Two classifiers for open set recognition and novelty detection based on extreme value theory. The first classifier is based on the generalized Pareto distribution (GPD) and the second classifier is based on the generalized extreme value (GEV) distribution. For details, see Vignotto, E., & Engelke, S. (2018) <arXiv:1808.09902>.
Datasets from Nelson, Coffin and Copeland "Introductory Statistics for Engineering Experimentation" (Elsevier, 2003) with sample code.
An interface package for accessing data from the EpiGraphDB <https://epigraphdb.org> platform. It provides easy access to the platform with functions that query the corresponding REST endpoints on the API <https://api.epigraphdb.org> and return the response data in tibble data frame format.
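For example, a minimal sketch using the package's query_epigraphdb() helper (the "/mr" route and its parameters are illustrative; see the API documentation at <https://api.epigraphdb.org> for the available endpoints):

    library(epigraphdb)

    # Query a REST endpoint and return the response as a tibble.
    results <- query_epigraphdb(
      route  = "/mr",
      params = list(exposure_trait = "Body mass index",
                    outcome_trait  = "Coronary heart disease"),
      mode   = "table"
    )
    results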
Import data from EpiData XML files (.epx) and convert it to R data structures.