Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
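For example, here is a minimal sketch of calling this endpoint with Python's requests library. The base URL below is a placeholder for this site's address, and since the exact pagination header names are not listed above, the sketch simply prints all response headers:

import requests

BASE_URL = "https://example.org"  # placeholder: replace with this site's base URL

# Search for packages matching "hello", requesting the first page of 20 results.
resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
resp.raise_for_status()

print(dict(resp.headers))  # pagination information (e.g. number of pages) is returned here
print(resp.json())         # the matching packages for this page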
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
In Multidimensional Systems, the When dimension allows us to express when the analysed facts have occurred. The purpose of this package is to provide support for implementing this dimension in the form of date and time tables for Relational On-Line Analytical Processing star database systems.
This package provides functions for easily creating interactive web pages using R Markdown that students can use in self-guided learning.
This package provides methods for estimating the profit, profit-maximizing price, demand, and consumer surplus of Word-of-Mouth campaigns on mean-field networks.
The German Wikibook "GNU R" introduces R to new users. This package is a collection of functions and data sets used in the German Wikibook "GNU R".
This package provides a new inverse probability of selection weighted Cox model to deal with outcome-dependent sampling in survival analysis.
Top-down mass spectrometry aims to identify entire proteins as well as their (post-translational) modifications or bound ions (e.g. Chen et al. (2018) <doi:10.1021/acs.analchem.7b04747>). The pattern of internal fragments (Haverland et al. (2017) <doi:10.1007/s13361-017-1635-x>) may reveal important information about the original structure of the proteins studied (Skinner et al. (2018) <doi:10.1038/nchembio.2515> and Li et al. (2018) <doi:10.1038/nchem.2908>). However, the number of possible internal fragments grows rapidly with longer proteins, and subsequent identification of internal fragments remains challenging, in particular since the accuracy of measurements with current mass spectrometers represents a limiting factor. This package attempts to deal with the complexity of internal fragments and allows identification of terminal and internal fragments from deconvoluted mass-spectrometry data.
Using a time-varying random parameters model developed in Koutchade et al. (2024) <https://hal.science/hal-04318163>, this package allows allocating variable input costs among crops produced by farmers, based on panel data that includes input expenditure aggregated at the farm level and acreage shares. It also takes data weighting into account and allows integrating time-varying and time-constant control variables.
List of English Scrabble words as listed in the OTCWL2014 <https://www.scrabbleplayers.org/w/Official_Tournament_and_Club_Word_List_2014_Edition>. Words are collated from the Word Game Dictionary <https://www.wordgamedictionary.com/word-lists/>.
This package provides a unified syntax to write data from a lazy dplyr tbl, a dplyr SQL query, or a data frame to a database table, with modes such as create, append, insert, update, upsert, patch, delete, overwrite, and overwrite_schema.
Retrieves the leaf area index (LAI) and soil moisture (SM) from microwave backscattering data using the water cloud model (WCM). The WCM algorithm is attributed to Prevot et al. (1993) <doi:10.1016/0034-4257(93)90053-Z>. The authors are grateful to SAC, ISRO, Ahmedabad for providing financial support to Dr. Prashant K Srivastava to conduct this research work.
Allows turning standard R code into offensive programming code. Provides code instrumentation to ease this change, and tools to assist and accelerate code production and tuning while using offensive programming techniques. Should improve code robustness and quality. Function calls can easily be verified on demand or in batch mode to assess parameter types and length conformity. Should improve coder productivity, as offensive programming reduces code size due to the reduced number of checks along the call chain. Should speed up processing, as many checks are reduced to a single check.
Estimates the standard and weighted Elo (WElo, Angelini et al., 2022 <doi:10.1016/j.ejor.2021.04.011>) rates. The current version provides Elo and WElo rates for tennis, according to different systems of weights (games or sets) and scale factors (constant, proportional to the number of matches, with more weight on Grand Slam matches or matches played on a specific surface). Moreover, the package gives the possibility of estimating the (bootstrap) standard errors for the rates. Finally, the package includes betting functions that automatically select the matches on which to place a bet.
Download and search data from the World Bank Indicators API, which provides access to nearly 16,000 time series indicators. See <https://datahelpdesk.worldbank.org/knowledgebase/articles/889392-about-the-indicators-api-documentation> for further details about the API.
This package provides survival analysis functions with support for time-dependent and subject-specific (e.g., propensity score) weighting. Implements weighted estimation for Cox models, Kaplan-Meier survival curves, and treatment differences with point-wise and simultaneous confidence bands. Includes restricted mean survival time (RMST) comparisons evaluated across all potential truncation times with both point-wise and simultaneous confidence bands. See Cole, S. R. & Hernán, M. A. (2004) <doi:10.1016/j.cmpb.2003.10.004> for methodological background.
Simulates the results of completed randomized controlled trials, as if they had been conducted as adaptive Multi-Arm Bandit (MAB) trials instead. Augmented inverse probability weighted estimation (AIPW), outlined by Hadad et al. (2021) <doi:10.1073/pnas.2014602118>, is used to robustly estimate the probability of success for each treatment arm under the adaptive design. Provides customization options to simulate perfect/imperfect information, stationary/non-stationary bandits, blocked treatment assignments, control augmentation, and other hybrid strategies for assigning treatment arms. The methods used in simulation were inspired by Offer-Westort et al. (2021) <doi:10.1111/ajps.12597>.
Evaluates the prediction performance of smaller regions of spectra for chemometrics. Offers segmentation of spectra, evolving dimension regions, and sliding windows as selection methods. Selects the best model among those computed based on error metrics. See Chen et al. (2017) <doi:10.1007/s00216-017-0218-9>.
Predicts individual race/ethnicity using surname, first name, middle name, geolocation, and other attributes, such as gender and age. The method utilizes Bayes Rule (with optional measurement error correction) to compute the posterior probability of each racial category for any given individual. The package implements methods described in Imai and Khanna (2016) "Improving Ecological Inference by Predicting Individual Ethnicity from Voter Registration Records" Political Analysis <DOI:10.1093/pan/mpw001> and Imai, Olivella, and Rosenman (2022) "Addressing census data problems in race imputation via fully Bayesian Improved Surname Geocoding and name supplements" <DOI:10.1126/sciadv.adc9824>. The package also incorporates the data described in Rosenman, Olivella, and Imai (2023) "Race and ethnicity data for first, middle, and surnames" <DOI:10.1038/s41597-023-02202-2>.
This package provides a toolkit to set up an R data package in a consistent structure. Automates tasks like tidy data export, data dictionary documentation, README and website creation, and citation management.
ETS stands for Error, Trend, and Seasonality, and it is a popular time series forecasting method. Wavelet decomposition can be used for denoising, compression, and feature extraction of signals. By removing the high-frequency components, wavelet decomposition can remove noise from the data while preserving important features. A hybrid Wavelet ETS (Error, Trend, Seasonality) model has been developed for time series forecasting using the algorithm of Anjoy and Paul (2017) <DOI:10.1007/s00521-017-3289-9>.
This package provides routing based on the path-tree Rust crate. The routing is general purpose in the sense that any type of R object can be associated with a path, not just a handler function.
New tools for the imputation of missing values in high-dimensional data are introduced using non-parametric nearest neighbor methods. The package includes weighted nearest neighbor imputation methods that use specific distances for selected variables, as well as an automatic cross-validation procedure, and does not require prespecified values for the tuning parameters. It can be used to impute missing values in high-dimensional data when the sample size is smaller than the number of predictors. For more information see Faisal and Tutz (2017) <doi:10.1515/sagmb-2015-0098>.
Assessing predictive models of spatial data can be challenging, both because these models are typically built for extrapolating outside the original region represented by training data and due to potential spatially structured errors, with "hot spots" of higher than expected error clustered geographically due to spatial structure in the underlying data. Methods are provided for assessing models fit to spatial data, including approaches for measuring the spatial structure of model errors, assessing model predictions at multiple spatial scales, and evaluating where predictions can be made safely. Methods are particularly useful for models fit using the tidymodels framework. Methods include Moran's I (Moran (1950) <doi:10.2307/2332142>), Geary's C (Geary (1954) <doi:10.2307/2986645>), Getis-Ord's G (Ord and Getis (1995) <doi:10.1111/j.1538-4632.1995.tb00912.x>), agreement coefficients from Ji and Gallo (2006) <doi:10.14358/PERS.72.7.823>, agreement metrics from Willmott (1981) <doi:10.1080/02723646.1981.10642213> and Willmott et al. (2012) <doi:10.1002/joc.2419>, an implementation of the area of applicability methodology from Meyer and Pebesma (2021) <doi:10.1111/2041-210X.13650>, and an implementation of multi-scale assessment as described in Riemann et al. (2010) <doi:10.1016/j.rse.2010.05.010>.
This package provides API access to the Walmart Open API <https://developer.walmartlabs.com/>, which contains data about stores, Value of the Day, and products, including names, sale prices, shipping rates, and taxonomies.
Power calculator for the two-sample Wilcoxon-Mann-Whitney rank-sum test for a continuous outcome (Mollan, Trumble, Reifeis et al., Mar. 2020) <doi:10.1080/10543406.2020.1730866> <arXiv:1901.04597>, (Mann and Whitney 1947) <doi:10.1214/aoms/1177730491>, (Shieh, Jan, and Randles 2006) <doi:10.1080/10485250500473099>.