Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
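For example, a minimal R sketch using the curl and jsonlite packages (the base URL below is a placeholder; substitute your instance's address):

library(curl)
library(jsonlite)
base <- "https://example.org"  # placeholder: your instance's base URL
resp <- curl_fetch_memory(paste0(base, "/api/packages?search=hello&page=1&limit=20"))
pkgs <- fromJSON(rawToChar(resp$content))  # the matching packages
parse_headers(resp$headers)  # pagination info (e.g. page count) arrives in the headers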
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
There is hardly an ophthalmic researcher who has not had headaches from handling visual acuity entries: different notations, untidy records. That can now be a thing of the past. eye makes it as easy as pie to work with VA data: easy cleaning, and easy conversion between Snellen, logMAR, ETDRS letters, and qualitative visual acuity. The eye package automates the pesky task of counting patients and eyes, and can help clean data with easy re-coding of right and left eyes. It also contains functions for reshaping eye-side-specific variables between wide and long formats. Visual acuity conversion is based on Schulze-Bonsel et al. (2006) <doi:10.1167/iovs.05-0981>, Gregori et al. (2010) <doi:10.1097/iae.0b013e3181d87e04>, Beck et al. (2003) <doi:10.1016/s0002-9394(02)01825-1> and Bach (2007) <https://michaelbach.de/sci/acuity.html>.
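A short sketch of typical usage; the helper names below (va(), eyes(), recodeye()) follow the package documentation as we understand it, so double-check against the package help:

library(eye)
va(c("20/40", "0.3", "6/6"), to = "logmar")  # convert mixed VA notations to logMAR
recodeye(c("od", "LEFT", "r"))  # standardize right/left eye codes
eyes(my_data)  # count patients and eyes; 'my_data' is a hypothetical data frame with id and eye columns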
This package provides methods for data analysis from an entropic perspective. These methods are nonparametric and perform well on non-ordinal data. It currently includes HeatMap() for visualizing distributional characteristics among multiple populations (groups).
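A minimal sketch, assuming HeatMap() accepts a collection of samples, one per group (an assumption; the exact interface may differ, so consult ?HeatMap):

# two hypothetical groups of count data
groups <- list(a = rpois(100, 3), b = rpois(100, 5))
HeatMap(groups)  # assumed call; see the package help for the actual arguments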
Analysis and visualization of similarities between epilepsy ontologies based on text mining results by comparing ranked lists of co-occurring drug terms in the BioASQ corpus. The ranked result lists of neurological drug terms co-occurring with terms from the epilepsy ontologies EpSO, ESSO, EPILONT, EPISEM and FENICS undergo further analysis. The source data to create the ranked lists of drug names is produced using the text mining workflows described in Mueller, Bernd and Hagelstein, Alexandra (2016) <doi:10.4126/FRL01-006408558>, Mueller, Bernd et al. (2017) <doi:10.1007/978-3-319-58694-6_22>, Mueller, Bernd and Rebholz-Schuhmann, Dietrich (2020) <doi:10.1007/978-3-030-43887-6_52>, and Mueller, Bernd et al. (2022) <doi:10.1186/s13326-021-00258-w>.
Splits experiment sentences according to an experiment design given by the user; the result can be used in E-Prime (<https://pstnet.com/products/e-prime/>).
This package provides methods for working with dose-finding clinical trials. We provide implementations of many dose-finding clinical trial designs, including the continual reassessment method (CRM) by O'Quigley et al. (1990) <doi:10.2307/2531628>, the toxicity probability interval (TPI) design by Ji et al. (2007) <doi:10.1177/1740774507079442>, the modified TPI (mTPI) design by Ji et al. (2010) <doi:10.1177/1740774510382799>, the Bayesian optimal interval design (BOIN) by Liu & Yuan (2015) <doi:10.1111/rssc.12089>, EffTox by Thall & Cook (2004) <doi:10.1111/j.0006-341X.2004.00218.x>, the design of Wages & Tait (2015) <doi:10.1080/10543406.2014.920873>, and the 3+3 design described by Korn et al. (1994) <doi:10.1002/sim.4780131802>. All designs are implemented with a common interface. We also offer optional additional classes to tailor the behaviour of all designs, including avoiding skipping doses, stopping after n patients have been treated at the recommended dose, stopping when a toxicity condition is met, or demanding that n patients are treated before stopping is allowed. By daisy-chaining these classes together using the pipe operator from magrittr, it is simple to tailor the behaviour of a dose-finding design so that it behaves as the trialist wants. Having provided a flexible interface for specifying designs, we then provide functions to run simulations and calculate dose-paths for future cohorts of patients.
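For instance, a sketch of the daisy-chaining interface (the skeleton, target and stopping rule below are illustrative, not a validated trial design):

library(escalation)
library(magrittr)
design <- get_dfcrm(skeleton = c(0.05, 0.1, 0.2, 0.35), target = 0.25) %>%
  dont_skip_doses() %>%
  stop_at_n(n = 12)
fit <- design %>% fit('1NNN 2NTN')  # outcomes: dose level, then N (no toxicity) / T (toxicity) per patient
recommended_dose(fit)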
Implementation of the Centre of Gravity method and the Extrapolated Centre of Gravity method, with support for replicated observations. References: Cameron et al. (1982) <doi:10.1366/0003702824638610> and JCGM (2008) <doi:10.59161/JCGM100-2008E>.
Simulates and estimates the Exponential Random Partition Model presented in the paper by Hoffman, Block, and Snijders (2023) <doi:10.1177/00811750221145166>. It can also be used to estimate longitudinal partitions, following the model proposed in Hoffman and Chabot (2023) <doi:10.1016/j.socnet.2023.04.002>. The model is an exponential family distribution on the space of partitions (sets of non-overlapping groups) and is so named in reference to the Exponential Random Graph Models (ERGM) for networks.
Digital simulation of electrochemical processes. Each function allows for implicit and explicit solution of the differential equations using methods such as Euler, backward implicit, Runge-Kutta 4, Crank-Nicolson and the backward differentiation formula, as well as different numbers of points for the derivative approximation. Several electrochemical processes can be simulated, such as chronoamperometry, potential step, linear sweep, cyclic voltammetry, cyclic voltammetry with an electrochemical reaction followed by a chemical reaction (EC mechanism), and CV with two consecutive electrochemical reactions (EE mechanism). Update 1.1.0 added a general-purpose CV function that allows simulating up to four EE mechanisms combined with a chemical reaction for each species. Update 1.2.0 improved the accuracy of the measurements and allows personalized data resolution for the simulation. A bibliography regarding these methods can be found in the following texts: Dieter Britz, Jorg Strutwolf (2016) <ISBN:978-3-319-30292-8>; Allen J. Bard, Larry R. Faulkner (2000) <ISBN:978-0-471-04372-0>.
This is a collection of data files for exploring sightings of wild things, relative to weather and tourism patterns in Australia.
An approach and software for modelling marine and freshwater ecosystems. It is articulated entirely around trophic levels. EcoTroph's key displays are bivariate plots, with trophic levels as the abscissa, and biomass flows or related quantities as ordinates. Thus, trophic ecosystem functioning can be modelled as a continuous flow of biomass surging up the food web, from lower to higher trophic levels, due to predation and ontogenic processes. Such an approach, wherein species as such disappear, may be viewed as the ultimate stage in the use of the trophic level metric for ecosystem modelling, providing a simplified but potentially useful caricature of ecosystem functioning and impacts of fishing. This version contains the catch trophic spectrum analysis (CTSA) function and corrected versions of the mf.diagnosis and create.ETmain functions.
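A heavily hedged sketch of the workflow (the example dataset name is an assumption; check the package's data index before running):

library(EcoTroph)
data(ecopath_guinee)  # assumed bundled Ecopath input table
ETmain <- create.ETmain(ecopath_guinee)  # build the ET-Main table from the Ecopath data
diagnosis <- create.ETdiagnosis(ETmain)  # fishing-effort diagnosis (uses mf.diagnosis internally)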
This package contains a set of clustering methods and evaluation metrics to select the best number of clusters based on clustering stability. Two references describe the methodology: Nezhadmoghadam and Tamez-Pena (2021) <doi:10.1016/j.compbiomed.2021.104753>, and Nezhadmoghadam et al. (2021) <doi:10.2174/1567205018666210831145825>.
Wrapper for the ggplot2 package that creates a variety of common charts (e.g. bar, line, area, ROC, waterfall, pie) while aiming to reduce typing.
This includes a dataset on the outcomes of US presidential elections since 1920, along with various predictors, as used in <https://www.vanderwalresearch.com/blog/15-elections>.
Allows R users to retrieve and parse data from the Urban Institute's Education Data API <https://educationdata.urban.org/> into a data.frame for analysis.
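For example (argument names follow the package README; adjust the endpoint and filters to your question):

library(educationdata)
df <- get_education_data(level = 'schools',
                         source = 'ccd',
                         topic = 'enrollment',
                         filters = list(year = 2014, grade = 'grade-12'))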
Tailored explicitly for Experience Sampling Method (ESM) data, it contains a suite of functions designed to simplify preprocessing steps and create subsequent reports. It empowers users to extract critical insights during preprocessing, conduct thorough data quality assessments (e.g., design and sampling scheme checks, compliance rate, careless responses), and generate visualizations and concise summary tables tailored specifically to ESM data. Additionally, it streamlines the creation of informative and interactive preprocessing reports, enabling researchers to transparently share their dataset preprocessing methodologies. Finally, it is part of a larger ecosystem which includes a framework and a web gallery (<https://preprocess.esmtools.com/>).
This package provides functions that support estimating, assessing and mapping regionally disaggregated indicators. So far, estimation methods comprise direct estimation, the model-based unit-level approach Empirical Best Prediction (see "Small area estimation of poverty indicators" by Molina and Rao (2010) <doi:10.1002/cjs.10051>), the area-level model (see "Estimates of income for small places: An application of James-Stein procedures to Census Data" by Fay and Herriot (1979) <doi:10.1080/01621459.1979.10482505>) and various extensions of it (adjusted variance estimation methods, log and arcsin transformation, spatial, robust and measurement error models), as well as their precision estimates. Assessment of the fitted model is supported by a summary and diagnostic plots. For a suitable presentation of estimates, map plots can be easily created. Furthermore, results can easily be exported to Excel. For a detailed description of the package and the methods used see "The R Package emdi for Estimating and Mapping Regionally Disaggregated Indicators" by Kreutzmann et al. (2019) <doi:10.18637/jss.v091.i07> and the second package vignette "A Framework for Producing Small Area Estimates Based on Area-Level Models in R".
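A sketch of a unit-level fit using the package's bundled example data (dataset and variable names as in the package vignette; treat as illustrative):

library(emdi)
data("eusilcA_pop")
data("eusilcA_smp")
ebp_model <- ebp(fixed = eqIncome ~ gender + eqsize + cash,
                 pop_data = eusilcA_pop, pop_domains = "district",
                 smp_data = eusilcA_smp, smp_domains = "district")
summary(ebp_model)  # model summary and diagnostics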
Perform dynamic model averaging with grid search as in Dangl and Halling (2012) <doi:10.1016/j.jfineco.2012.04.003> using parallel computing.
This package provides tools for transforming R expressions. Provides functions for finding, extracting, and replacing patterns in R language objects, similarly to how regular expressions can be used to find, extract, and replace patterns in text. Also provides functions for generating code using specially-formatted template files and for translating R expressions into similar expressions in other programming languages. The package may be helpful for advanced uses of R expressions, such as developing domain-specific languages.
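As a flavour of the general technique (plain base R, not this package's API): walking an expression tree and substituting one symbol for another.

# Replace every occurrence of one symbol with another in an unevaluated expression:
swap_symbol <- function(e, from, to) {
  if (is.symbol(e) && identical(e, from)) return(to)
  if (is.call(e)) e[] <- lapply(as.list(e), swap_symbol, from = from, to = to)
  e
}
swap_symbol(quote(a + f(a, 2)), quote(a), quote(b))
# yields: b + f(b, 2)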
Training and prediction functions for single-hidden-layer feedforward neural networks (SLFN) using the Extreme Learning Machine (ELM) algorithm. The ELM algorithm differs from traditional gradient-based algorithms in its very short training times: it needs no iterative tuning, and there is no need to set parameters such as the learning rate, momentum, or number of epochs. This is a reimplementation of the elmNN package using RcppArmadillo, after the elmNN package was archived. For more information, see "Extreme learning machine: Theory and applications" by Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew (2006), Elsevier B.V., <doi:10.1016/j.neucom.2005.12.126>.
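A toy regression example (interface per the elmNNRcpp documentation; nhid is the number of hidden units):

library(elmNNRcpp)
x <- matrix(runif(200), ncol = 2)
y <- matrix(x[, 1] + 2 * x[, 2] + rnorm(100, sd = 0.05), ncol = 1)
fit <- elm_train(x, y, nhid = 20, actfun = 'relu')
preds <- elm_predict(fit, newdata = x)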
This package provides a tool to run a batch of univariate or multivariate Cox models and return tidy results.
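The underlying idea, sketched with the survival and broom packages rather than this package's own interface:

library(survival)
library(broom)
covs <- c("age", "sex", "ph.ecog")  # one univariate Cox model per covariate
res <- do.call(rbind, lapply(covs, function(v) {
  fit <- coxph(as.formula(paste("Surv(time, status) ~", v)), data = lung)
  cbind(covariate = v, tidy(fit, exponentiate = TRUE, conf.int = TRUE))
}))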
This package provides a Shiny-based toolkit for item/test analysis. It is designed for multiple-choice, true-false, and open-ended questions. The toolkit is usable with datasets in 1-0 or other formats. Key analyses include difficulty, discrimination, response-option analysis, and reports. The classical test theory methods used are described in Ebel & Frisbie (1991, ISBN:978-0132892314).
A faster way for R users to translate English or Chinese sentences. You can pass in an English or Chinese sentence; the ecce package supports translation in both directions. It also supports browsing translation results on the web. In addition, it can obtain the pinyin of Chinese characters, so you can more easily understand their pronunciation.
The Australian Regulatory Guidelines for Prescription Medicines (ARGPM), in its guidance on "Stability testing for prescription medicines", recommends predicting the shelf life of chemically derived medicines from stability data by taking the worst-case situation at batch release into account. Consequently, if a change over time is observed, a release limit needs to be specified. The package supports finding a release limit and the associated shelf life, as well as the standard approach recommended by guidance Q1E "Evaluation of stability data" from the International Council for Harmonisation (ICH).
This package implements methods for functional data analysis based on the epigraph and hypograph indices. These methods transform functional datasets, whether in one or multiple dimensions, into multivariate datasets. The transformation involves applying the epigraph, hypograph, and their modified versions to both the original curves and their first and second derivatives. The calculation of these indices is tailored to the dimensionality of the functional dataset, with special considerations for dependencies between dimensions in multidimensional cases. This approach extends traditional multivariate data analysis techniques to the functional data setting. A key application of this package is the EHyClus method, which enhances clustering analysis for functional data across one or multiple dimensions using the epigraph and hypograph indices. See Pulido et al. (2023) <doi:10.1007/s11222-023-10213-7> and Pulido et al. (2024) <doi:10.48550/arXiv.2307.16720>.
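A small illustration of the indices themselves, using one common definition (the package's own functions generalize this to modified versions, derivatives, and multiple dimensions):

epigraph_index <- function(X) {
  # EI(x_i): share of sample curves lying entirely on or above curve i
  sapply(seq_len(nrow(X)), function(i) mean(apply(X, 1, function(r) all(r >= X[i, ]))))
}
hypograph_index <- function(X) {
  # HI(x_i): share of sample curves lying entirely on or below curve i
  sapply(seq_len(nrow(X)), function(i) mean(apply(X, 1, function(r) all(r <= X[i, ]))))
}
X <- matrix(rnorm(50 * 30), nrow = 50)  # 50 curves observed at 30 grid points
head(epigraph_index(X))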