Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
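For example, a minimal sketch of calling this endpoint from Python with the requests library (the base URL is an assumption; substitute the host serving this page):

import requests

BASE_URL = "https://example.org"  # hypothetical host; replace with this site's address

response = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "gcc@10", "page": 1, "limit": 20},
)
response.raise_for_status()

packages = response.json()        # matching packages for the requested page
print(response.headers)           # pagination information is returned in these headers
print(packages)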
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a framework with tools to compare two random variables via stochastic dominance. See the README.md at <https://github.com/EtorArza/RVCompare> for a quick start guide. It can compute the Cp and Cd of two probability distributions and the Cumulative Difference Plot as explained in E. Arza (2022) <doi:10.1080/10618600.2022.2084405>. It uses the bootstrap or the DKW bounds to compute the confidence bands of the cumulative distributions; these two methods are described in B. Efron (1979) <doi:10.1214/aos/1176344552> and P. Massart (1990) <doi:10.1214/aop/1176990746>.
TRUST4 or MiXCR is used to identify the clonotypes. The goal of rTCRBCRr is to process the results from these clonotyping tools and to analyze clonotype repertoire metrics based on chain names and IGH isotypes. A manuscript describing the methods in this package is in preparation; references will be added once it is published.
This package covers all steps in the credit scoring process, allowing the user to follow every step needed to build an effective scorecard. It provides functions for coarse binning of variables, Weights of Evidence (WOE) transformation, variable clustering, custom binning, visualization, and scaling of logistic regression coefficients. The results generate a scorecard that can be used as an effective credit scoring tool to evaluate risk. For complete details on the credit scoring process, see Siddiqi (2005, ISBN:047175451X).
This package provides functions for a classification method based on receiver operating characteristics (ROC). Briefly, features are selected according to their ranked AUC value in the training set. The selected features are merged by their mean value to form a meta-gene. The samples are ranked by their meta-gene value, and the meta-gene threshold that has the highest accuracy in splitting the training samples is determined. A new sample is classified by its meta-gene value relative to the threshold. The package is primarily aimed at two-class problems in gene expression data, but it may also apply to other problems.
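As an illustration of the procedure just described (not the package's own API), here is a minimal Python sketch using NumPy and scikit-learn; the number of selected features and the threshold search are simplified assumptions:

import numpy as np
from sklearn.metrics import roc_auc_score

def train_meta_gene(X, y, n_features=10):
    # Rank features by their AUC on the training set and keep the top ones.
    aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    selected = np.argsort(aucs)[::-1][:n_features]
    # Merge the selected features by their mean value to form the meta-gene.
    meta = X[:, selected].mean(axis=1)
    # Choose the meta-gene threshold with the highest accuracy on the training samples.
    thresholds = np.unique(meta)
    accuracies = [np.mean((meta > t).astype(int) == y) for t in thresholds]
    threshold = thresholds[int(np.argmax(accuracies))]
    return selected, threshold

def classify(x_new, selected, threshold):
    # A new sample is classified by its meta-gene value relative to the threshold.
    return int(x_new[selected].mean() > threshold)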
High-level and easy-to-use HTTP client for R. Provides functions for building HTTP queries, including query parameters, request bodies, headers, authentication, and more.
This function conducts variation partitioning and hierarchical partitioning to calculate the unique, shared (referred to as "common"), and individual contributions of each predictor (or matrix) towards the explained variation (R-squared and adjusted R-squared) in canonical analysis (RDA, CCA and db-RDA), applying the algorithm of Lai J., Zou Y., Zhang J., Peres-Neto P. (2022) "Generalizing hierarchical and variation partitioning in multiple regression and canonical analyses using the rdacca.hp R package", Methods in Ecology and Evolution, 13: 782-788 <DOI:10.1111/2041-210X.13800>.
Reads river network shape files and computes network distances. Also included are a variety of computation and graphical tools designed for fisheries telemetry research, such as minimum home range, kernel density estimation, and clustering analysis using empirical k-functions with a bootstrap envelope. Tools are also provided for editing the river networks, meaning there is no reliance on external software.
We provide functions to perform an empirical small telescopes analysis. This package contains two functions, SmallTelescopes() and EstimatePower(). Users only need to call SmallTelescopes() to conduct the analysis. For more information on small telescopes analysis, see Uri Simonsohn (2015) <doi:10.1177/0956797614567341>.
Computation of one-, two- and three-dimensional pseudo-observations based on recurrent events and terminal events. Generalised linear models are fitted using generalised estimating equations. Technical details on the bivariate procedure can be found in "Bivariate pseudo-observations for recurrent event analysis with terminal events" (Furberg et al., 2021) <doi:10.1007/s10985-021-09533-5>.
Systematic conservation prioritization with robust optimization techniques. This is important because conservation prioritizations typically only consider the most likely outcome associated with a conservation action (e.g., establishing a protected area will safeguard a threatened species population) and fail to consider other outcomes and their consequences for meeting conservation objectives. By extending the prioritizr package, this package can be used to generate conservation prioritizations that account for uncertainty in climate change scenario projections, species distribution models, ecosystem service models, and measurement errors. In particular, prioritizations can be generated to be fully robust to uncertainty by minimizing (or maximizing) objectives under the worst possible outcome. Since reducing the risk associated with achieving conservation objectives may mean sacrificing other objectives (e.g., minimizing protected area implementation costs), prioritizations can also be generated to be partially robust based on a specified confidence level parameter. Partially robust prioritizations can be generated based on the chance constrained programming problem (Charnes & Cooper 1959, <doi:10.1287/mnsc.6.1.73>) and the conditional value-at-risk problem (Rockafellar & Uryasev 2000, <doi:10.21314/JOR.2000.038>).
Data Envelopment Analysis for R, estimating robust DEA scores with and without environmental variables and performing returns-to-scale tests.
For a multisite replication project, computes the consistency metric P_orig, which is the probability that the original study would observe an estimated effect size as extreme or more extreme than it actually did, if in fact the original study were statistically consistent with the replications. Other recommended metrics are: (1) the probability of a true effect of scientifically meaningful size in the same direction as the original study's estimate; and (2) the probability of a true effect of meaningful size in the direction opposite the original study's estimate. These two can be computed using the function MetaUtility::prop_stronger. Additionally computes older metrics used in replication projects (namely, expected agreement in "statistical significance" between an original study and replication studies, as well as prediction intervals for the replication estimates). See Mathur and VanderWeele (under review; <https://osf.io/apnjk/>) for details.
Create and combine HTML and PDF reports from within R. Tables and listings can be designed for reporting, and R plots can also be included.
This package implements a robust procedure to estimate the means and covariance matrix of multiple variables with missing data using Huber weights, and then to estimate a structural equation model.
This package provides R-squared values and standardized regression coefficients for linear models applied to multiply imputed datasets as obtained by mice. Confidence intervals, zero-order correlations, and alternative adjusted R-squared estimates are also available. The methods are described in Van Ginkel and Karch (2024) <doi:10.1111/bmsp.12344> and in Van Ginkel (2020) <doi:10.1007/s11336-020-09696-4>.
Build robust and maintainable software with object-oriented design patterns in R. Design patterns abstract the experience of many software designers and architects over many years of solving similar problems, and present it as neat, well-defined components and interfaces. These are solutions that have withstood the test of time with respect to re-usability, flexibility, and maintainability. R6P provides abstract base classes with examples for a few known design patterns. The patterns were selected for their applicability to analytic projects in R. Using these patterns in R projects has proven effective in dealing with the complexity that data-driven applications possess.
An interface to iDigBio's search API that allows downloading specimen records. Searches are returned as a data.frame. Other functions, such as the metadata endpoints, return lists of information. iDigBio is a US project focused on digitizing and serving museum specimen collections on the web. See <https://www.idigbio.org> for information on iDigBio.
This package provides a pure R implementation of the median cut algorithm. Extracts the dominant colors from an image, and turns them into a scale for use in plots or for fun!
Download the lyrics of your favorite songs in text and table formats. Also search for related songs or song information. More information: <https://docs.genius.com/>.
Place 10 points numbered from 1 to 10 on a circle and connect each one by a straight line to the point corresponding to its multiplication by 2 (point 1 must be connected to 1 * 2 = 2, point 2 to 2 * 2 = 4, point 3 to 3 * 2 = 6, and so on). You will obtain an amazing geometric figure that becomes more intricate and beautiful as you vary the number of points and the multiplication table you use.
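A small Python sketch of the construction just described, using 10 points and the times-2 table (both parameters can be varied); drawing the segments is left to a plotting library of your choice:

import math

def times_table_segments(n=10, k=2):
    def point(i):
        angle = 2 * math.pi * i / n
        return (math.cos(angle), math.sin(angle))
    # One segment per point i, joining point i to the point (i * k) mod n.
    return [(point(i), point((i * k) % n)) for i in range(n)]

for start, end in times_table_segments():
    print(start, "->", end)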
Extension to REddyProc that allows reading data from netCDF files.
This package implements the Simulating Optimal FUNctioning framework for site-scale simulations of ecosystem processes, including model calibration. It contains Fortran 90 modules for the P-model (Stocker et al. (2020) <doi:10.5194/gmd-13-1545-2020>), SPLASH (Davis et al. (2017) <doi:10.5194/gmd-10-689-2017>) and BiomeE (Weng et al. (2015) <doi:10.5194/bg-12-2655-2015>).
Exchange rates for the Kenyan Shilling against other currencies: US Dollar, Euro, Sterling Pound, Tanzanian Shilling, and Ugandan Shilling.
This package provides a collection of functions to simulate dice rolls and the like. In particular, experiments and exercises can be performed looking at combinations and permutations of values in dice rolls and coin flips, together with the corresponding frequencies of occurrences. When applying each function, the user has to input the number of times (rolls, flips) to toss the dice. Needless to say, the more the tosses, the more closely the frequencies approximate the actual probabilities. Moreover, the package provides functions to generate non-transitive sets of dice (like Efron's) and to check whether a given set of dice is non-transitive with a given probability.
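As an illustration of the non-transitivity idea (not the package's own functions), exact pairwise win probabilities for Efron's dice can be checked in a few lines of Python:

from itertools import product
from fractions import Fraction

EFRON = {
    "A": [4, 4, 4, 4, 0, 0],
    "B": [3, 3, 3, 3, 3, 3],
    "C": [6, 6, 2, 2, 2, 2],
    "D": [5, 5, 5, 1, 1, 1],
}

def win_probability(die_x, die_y):
    # Probability that die_x rolls strictly higher than die_y.
    wins = sum(1 for x, y in product(die_x, die_y) if x > y)
    return Fraction(wins, len(die_x) * len(die_y))

# Each die beats the next one in the cycle A > B > C > D > A with probability 2/3.
cycle = ["A", "B", "C", "D", "A"]
for x, y in zip(cycle, cycle[1:]):
    print(f"P({x} beats {y}) = {win_probability(EFRON[x], EFRON[y])}")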