Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
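For example, a request from R using the httr package might look like the sketch below; the base URL is a placeholder, so substitute this site's address:

    library(httr)

    # Search for packages matching "hello"; page and limit control pagination.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))

    results <- content(resp)  # parsed list of matching packages
    headers(resp)             # pagination details are in the response headers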
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a reliable and flexible toolbox to score patient-reported outcome (PRO), Quality of Life (QOL), and other psychometric measures. The guiding philosophy is that scoring errors can be eliminated by using a limited number of well-tested, well-behaved functions to score PRO-like measures. The workhorse of the package is the scoreScale function, which can be used to score most single-scale measures. It can reverse code items that need to be reversed before scoring and pro-rate scores for missing item data. Currently, three different types of scores can be output: summed item scores, mean item scores, and scores scaled to range from 0 to 100. The PROscorerTools functions can be used to write new functions that score more complex measures. In fact, PROscorerTools functions are the building blocks of the scoring functions in the PROscorer package (which is a repository of functions that score specific commonly-used instruments). Users are encouraged to use PROscorerTools to write scoring functions for their favorite PRO-like instruments, and to submit these functions for inclusion in PROscorer (a tutorial vignette will be added soon). The long-term vision for the PROscorerTools and PROscorer packages is to provide an easy-to-use system to facilitate the incorporation of PRO measures into research studies in a scientifically rigorous and reproducible manner. These packages and their vignettes are intended to help establish and promote "best practices" for scoring and describing PRO-like measures in research.
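As an illustration, scoring a hypothetical five-item scale with scoreScale might look like the following sketch; the argument names (items, revitems, type) and the "pomp" option are assumptions based on the description above, not verified against the package documentation:

    library(PROscorerTools)

    # dat is a hypothetical data frame with items in columns q1..q5;
    # q2 and q4 are assumed to need reverse coding before scoring.
    scores <- scoreScale(df = dat,
                         items = paste0("q", 1:5),
                         revitems = c("q2", "q4"),
                         type = "pomp")  # assumed name for 0-100 scaled scores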
Tools from the domain of graph theory can be used to quantify the complexity and vulnerability to failure of a software package; that is the guiding philosophy of this package. pkgnet provides tools to analyze the dependencies between the functions in an R package and between the packages it imports. See the pkgnet website for vignettes and other supplementary information.
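A minimal sketch of typical usage, assuming pkgnet's CreatePackageReport() entry point:

    library(pkgnet)

    # Build an HTML report with the function-level and package-level
    # dependency graphs for an installed package.
    report <- CreatePackageReport(pkg_name = "ggplot2")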
Automates the process of creating a scale bar and north arrow in any package that uses base graphics to plot in R. Bounding box tools help find and manipulate extents. Finally, there is a function to automate the process of setting margins, plotting the map, scale bar, and north arrow, and resetting graphic parameters upon completion.
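Assuming this description belongs to the prettymapr package, a minimal sketch of the automated workflow might be:

    library(prettymapr)

    # prettymap() sets the margins, evaluates the plotting expression, then
    # adds the scale bar and north arrow and restores the graphic parameters.
    prettymap({
      plot(df$lon, df$lat)  # any base-graphics map plot; df is hypothetical
    })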
This package provides methods for plotting potentially large (raster) images interactively on a plain HTML canvas. In contrast to the mapview package, data are plotted without a background map, but they can be projected to any spatial coordinate reference system. The package supports plotting of the classes RasterLayer, RasterStack, and RasterBrick (from the raster package) as well as PNG files located on disk. Interactivity includes zooming, panning, and mouse-location information. In the case of multi-layer RasterStacks or RasterBricks, RGB image plots are created (similar to raster::plotRGB, but interactive).
An extensible repository of accurate, up-to-date functions to score commonly used patient-reported outcome (PRO), quality of life (QOL), and other psychometric and psychological measures. PROscorer, together with the PROscorerTools package, is a system to facilitate the incorporation of PRO measures into research studies and clinical settings in a scientifically rigorous and reproducible manner. These packages and their vignettes are intended to help establish and promote best practices for scoring PRO and PRO-like measures in research. The PROscorer Instrument Descriptions vignette contains descriptions of each instrument scored by PROscorer, complete with references. These instrument descriptions are suitable for inclusion in formal study protocol documents, grant proposals, and manuscript Method sections. Each PROscorer function is composed of helper functions from the PROscorerTools package, and users are encouraged to contribute new functions to PROscorer. More scoring functions are currently in development and will be added in future updates.
Annotate plots with legends for continuous variables and colour spectra using the base graphics plotting tools, and manipulate irregular polygons. Includes palettes for colour-blind viewers.
Power logit regression models for bounded continuous data, in which the density generator may be normal, Student-t, power exponential, slash, hyperbolic, sinh-normal, or type II logistic. Diagnostic tools associated with the fitted model, such as the residuals, local influence measures, leverage measures, and goodness-of-fit statistics, are implemented. The estimation process follows the maximum likelihood approach and, currently, the package supports two types of estimators: the usual maximum likelihood estimator and the penalized maximum likelihood estimator. More details about power logit regression models are described in Queiroz and Ferrari (2022) <arXiv:2202.01697>.
This package provides a doubly robust precision medicine approach to fit, cross-validate, and visualize prediction models for the conditional average treatment effect (CATE). It implements the doubly robust estimation and semiparametric modeling approach to treatment-covariate interactions proposed by Yadlowsky et al. (2020) <doi:10.1080/01621459.2020.1772080>.
Toolkit for fitting point process models with sequences of LASSO penalties ("regularisation paths"), as described in Renner, I.W. and Warton, D.I. (2013) <doi:10.1111/j.1541-0420.2012.01824.x>. Regularisation paths of Poisson point process models or area-interaction models can be fitted with LASSO, adaptive LASSO or elastic net penalties. A number of criteria are available to judge the bias-variance tradeoff.
This package provides a system to increase the efficiency of dynamic web-scraping with RSelenium by leveraging parallel processing. You provide a function wrapper for your RSelenium scraping routine with a set of inputs, and parsel runs it in several browser instances. Chunked input processing as well as error catching and logging ensures seamless execution and minimal data loss, even when unforeseen RSelenium errors occur. You can additionally build safe scraping functions with minimal coding by utilizing constructor functions that act as wrappers around RSelenium methods.
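A hedged sketch of the pattern this describes, assuming parsel's parscrape() interface and that the RSelenium driver is exposed to the wrapper as remDr (both assumptions, not verified against the package documentation):

    library(parsel)

    # Wrapper around an RSelenium routine; takes one input URL.
    get_title <- function(url) {
      remDr$navigate(url)       # remDr assumed to be supplied by parsel
      remDr$getTitle()[[1]]
    }

    out <- parscrape(scrape_fun = get_title,
                     scrape_input = c("https://example.org", "https://example.com"),
                     cores = 2,
                     packages = c("RSelenium"),
                     browser = "firefox")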
We implement two least-squares estimators under a k-monotony constraint using a method based on the Support Reduction Algorithm of Groeneboom et al. (2008) <DOI:10.1111/j.1467-9469.2007.00588.x>. The first is a projection estimator onto the set of k-monotone discrete functions; the second is a projection onto the set of k-monotone discrete probabilities. This package provides functions to generate samples from the spline basis of Lefevre and Loisel (2013) <DOI:10.1239/jap/1378401239> and from mixtures of splines.
This package provides a probabilistic framework that integrates Data Envelopment Analysis (DEA) (Banker et al., 1984) <doi:10.1287/mnsc.30.9.1078> with machine learning classifiers (Kuhn, 2008) <doi:10.18637/jss.v028.i05> to estimate both the (in)efficiency status and the probability of efficiency for decision-making units. The approach trains predictive models on DEA-derived efficiency labels (Charnes et al., 1985) <doi:10.1016/0304-4076(85)90133-2>, enabling explainable artificial intelligence (XAI) workflows with global and local interpretability tools, including permutation importance (Molnar et al., 2018) <doi:10.21105/joss.00786>, Shapley value explanations (Strumbelj & Kononenko, 2014) <doi:10.1007/s10115-013-0679-x>, and sensitivity analysis (Cortez, 2011) <https://CRAN.R-project.org/package=rminer>. The framework also supports probability-threshold peer selection and counterfactual improvement recommendations for benchmarking and policy evaluation. The probabilistic efficiency framework is detailed in González-Moyano et al. (2025) "Probability-based Technical Efficiency Analysis through Machine Learning", in review for publication.
Simulation of continuous, correlated high-dimensional data with time to event or binary response, and parallelized functions for Lasso, Ridge, and Elastic Net penalized regression with repeated starts and two-dimensional tuning of the Elastic Net.
R API for Pathling, a tool for querying and transforming electronic health record data that is represented using the Fast Healthcare Interoperability Resources (FHIR) standard; see <https://pathling.csiro.au/docs>.
Read depth data from genotyping-by-sequencing (GBS) or restriction site-associated DNA sequencing (RAD-seq) are imported and used to make Bayesian probability estimates of genotypes in polyploids or diploids. The genotype probabilities, posterior mean genotypes, or most probable genotypes can then be exported for downstream analysis. polyRAD is described by Clark et al. (2019) <doi:10.1534/g3.118.200913>, and the Hind/He statistic for marker filtering is described by Clark et al. (2022) <doi:10.1186/s12859-022-04635-9>. A variant calling pipeline for highly duplicated genomes is also included and is described by Clark et al. (2020, Version 1) <doi:10.1101/2020.01.11.902890>.
The package's name is derived from the French pour ridge. It provides functionality for ridge-type estimation of a potpourri of models. Currently, this estimation concerns various Gaussian graphical models from different study designs. Among others, it considers the regular Gaussian graphical model and a mixture of such models. The porridge package implements the estimation of the former either i) from data with replicated observations, by penalized log-likelihood maximization using the regular ridge penalty on the parameters (van Wieringen, Chen, 2021), or ii) from non-replicated data, by means of either a ridge estimator with multiple shrinkage targets (as presented in van Wieringen et al. 2020, <doi:10.1016/j.jmva.2020.104621>) or the generalized ridge estimator that allows for the inclusion of both quantitative and qualitative prior information on the precision matrix via element-wise penalization and shrinkage (van Wieringen, 2019, <doi:10.1080/10618600.2019.1604374>). Additionally, the porridge package facilitates the ridge-penalized estimation of a mixture of Gaussian graphical models (Aflakparast et al., 2018). The package also includes functionality for ridge-type estimation of the generalized linear model (as presented in van Wieringen, Binder, 2022, <doi:10.1080/10618600.2022.2035231>).
Calculate parametric mortality and fertility models, following the packages BaSTA (Colchero, Jones and Rebke (2012) <doi:10.1111/j.2041-210X.2012.00186.x>) and BaFTA <https://github.com/fercol/BaFTA>, as well as summary statistics (e.g. ageing rates, life expectancy, lifespan equality), life tables, and product-limit estimators from census data.
When working with big data sets, RAM conservation is critically important. However, it is not always enough to just monitor the size of the objects created. So-called "copy-on-modify" behavior, characteristic of R, means that some expressions or functions may require an unexpectedly large amount of RAM overhead. For example, replacing a single value in a matrix duplicates that matrix in the back-end, making the task require twice as much RAM as the matrix itself occupies. This package makes it easy to monitor the total and peak RAM used so that developers can quickly identify and eliminate RAM-hungry code.
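The copy-on-modify duplication described above can be observed directly with base R's tracemem():

    # A 1000 x 1000 double matrix occupies about 8 MB.
    m <- matrix(0, nrow = 1e3, ncol = 1e3)
    tracemem(m)    # start tracing copies of m
    m[1, 1] <- 1   # prints a tracemem message: the whole matrix was duplicated
    untracemem(m)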
This package contains functions for the construction and visualization of underlying and reflexivity graphs of the three families of proximity catch digraphs (PCDs) (see Ceyhan (2005) ISBN:978-3-639-19063-2), and for computing the edge density of these PCD-based graphs, which are then used for testing patterns of segregation and association against complete spatial randomness (CSR) or uniformity in the one- and two-dimensional cases. The PCD families considered are Arc-Slice PCDs, Proportional-Edge (PE) PCDs (Ceyhan et al. (2006) <doi:10.1016/j.csda.2005.03.002>), and Central Similarity PCDs (Ceyhan et al. (2007) <doi:10.1002/cjs.5550350106>). See also Ceyhan (2016) <doi:10.1016/j.stamet.2016.07.003> for the edge density of the underlying and reflexivity graphs of PE-PCDs. The package also has tools for visualization of PCD-based graphs for one-, two-, and three-dimensional data.
This package provides a bioinformatics method developed for analyzing the heterogeneity of single-cell populations. Phitest provides an objective and automatic method to evaluate the performance of clustering and quality of cell clusters.
Facilitates the testing of causal relationships among lineage-pair traits in a phylogenetically informed context. Lineage-pair traits are characters that are defined for pairs of lineages instead of individual taxa. Examples include the strength of reproductive isolation, range overlap, competition coefficient, diet niche similarity, and relative hybrid fitness. Users supply a lineage-pair dataset and a phylogeny. phylopairs calculates a covariance matrix for the pairwise-defined data and provides built-in models to test for relationships among variables while taking this covariance into account. Bayesian sampling is run through built-in Stan programs via the rstan package. The various models and methods that this package makes available are described in Anderson et al. (In Review), Coyne and Orr (1989) <doi:10.1111/j.1558-5646.1989.tb04233.x>, Fitzpatrick (2002) <doi:10.1111/j.0014-3820.2002.tb00860.x>, and Castillo (2007) <doi:10.1002/ece3.3093>.
The data sets used in the online course "PogromcyDanych" ("Data Crunchers"). You can process data in many ways, and the Data Crunchers course will introduce you to this variety. For this reason we work on data sets of different sizes (from several to several hundred thousand rows), with various levels of complexity (from two to two thousand columns), and prepared in different formats (text data, quantitative data, and qualitative data). All of these data sets were gathered in a single big package called PogromcyDanych to facilitate access to them. It contains all sorts of data sets, such as data about offer prices of cars, results of opinion polls, information about changes in stock market indices, data about names given to newborn babies, ski jumping results, and information about treatment outcomes of breast cancer patients.
Levels and changes of productivity and profitability are measured with various indices. The package contains the multiplicatively complete Färe-Primont, Fisher, Hicks-Moorsteen, Laspeyres, Lowe, and Paasche indices, as well as the classic Malmquist productivity index. The Färe-Primont and Lowe indices satisfy the transitivity property and can therefore be used for multilateral or multitemporal comparison. The Fisher, Hicks-Moorsteen, Laspeyres, Malmquist, and Paasche indices are not transitive and should only be used for binary comparison. All indices can also be decomposed into different components, providing insightful information on the sources of productivity and profitability changes. When the Malmquist productivity index is used, the technological change index can be further decomposed into bias technological change components. The package also allows the user to prohibit technological regression (negative technological change). In the case of the Fisher, Hicks-Moorsteen, Laspeyres, and Paasche indices and the transitive Färe-Primont and Lowe indices, it is furthermore possible to rule out technological change. Deflated shadow prices can also be obtained. In addition, the package supports optional parallel computing, depending on the user's computer configuration. All computations are carried out with nonparametric Data Envelopment Analysis (DEA), and several assumptions regarding returns to scale are available. All DEA linear programs are implemented using lp_solve.
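Assuming this description belongs to the productivity package, a sketch of computing a Malmquist index might look like this; the function and argument names are unverified assumptions, and mydata is a hypothetical panel data frame:

    library(productivity)

    # mydata: one row per firm-year, with inputs x1, x2 and outputs y1, y2.
    res <- malm(data = mydata, id.var = "firm", time.var = "year",
                x.vars = c("x1", "x2"), y.vars = c("y1", "y2"),
                rts = "crs", orientation = "out", parallel = FALSE)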
Estimation of two- and three-way dynamic panel threshold regression models (Di Lascio and Perazzini (2024) <https://repec.unibz.it/bemps104.pdf>; Di Lascio and Perazzini (2022, ISBN:978-88-9193-231-0); Seo and Shin (2016) <doi:10.1016/j.jeconom.2016.03.005>) through the generalized method of moments, based on the first-difference transformation and the use of instrumental variables. The models can be used to detect a change point in a time series. In addition, random number generation is also implemented.