Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
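For example, a minimal client sketch in Python (the base URL here is a placeholder for your instance, and the shape of the JSON body is an assumption; pagination details are read from the response headers as described above):

    import requests

    # Query the package search API; 'search' is the query string,
    # 'page' and 'limit' control pagination.
    resp = requests.get(
        "https://example.org/api/packages",   # placeholder base URL
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    print(resp.json())     # matching packages (body shape is an assumption)
    print(resp.headers)    # pagination information arrives in these headers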
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Automatic acquisition of open data from the resources of the Polish Head Office of Geodesy and Cartography ('Główny Urząd Geodezji i Kartografii', <https://www.gov.pl/web/gugik>). Available datasets include various types of numeric, raster, and vector data, such as orthophotomaps, digital elevation models (digital terrain models, digital surface models, point clouds), the state register of borders, spatial databases, geometries of cadastral parcels, 3D models of buildings, and more. It is also possible to geocode addresses or objects using the geocodePL_get() function.
This package implements a robust procedure to estimate the means and covariance matrix of multiple variables with missing data using Huber weights, and then to estimate a structural equation model.
This package provides an interface to access data from the International Union for Conservation of Nature (IUCN) Red List <https://api.iucnredlist.org/api-docs/index.html>. It allows users to retrieve up-to-date information on species conservation status, supporting biodiversity research and conservation efforts.
This package provides tools for estimating, testing, and simulating abundance in a two-event (Petersen) mark-recapture experiment. Functions are given to calculate the Petersen, Chapman, and Bailey estimators and their associated variances. However, the principal utility is a set of functions to simulate random draws from these estimators and use them to conduct hypothesis tests and power calculations. Additionally, a set of functions is provided for generating confidence intervals via bootstrapping. Functions are also provided to test abundance estimator consistency under complete or partial stratification, to calculate stratified or partially stratified estimators, and to calculate recommended sample sizes. Referenced methods can be found in Arnason et al. (1996) <ISSN:0706-6457>, Bailey (1951) <DOI:10.2307/2332575>, Bailey (1952) <DOI:10.2307/1913>, Chapman (1951) <NAID:20001644490>, Cohen (1988) <ISBN:0-12-179060-6>, Darroch (1961) <DOI:10.2307/2332748>, and Robson and Regier (1964) <ISSN:1548-8659>.
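For orientation, the standard textbook forms of these estimators (not necessarily the package's internal notation), with n_1 animals marked in the first event, n_2 captured in the second, and m_2 recaptured, are:

    \hat{N}_{\mathrm{Petersen}} = \frac{n_1 n_2}{m_2}, \qquad
    \hat{N}_{\mathrm{Chapman}} = \frac{(n_1+1)(n_2+1)}{m_2+1} - 1, \qquad
    \hat{N}_{\mathrm{Bailey}} = \frac{n_1 (n_2+1)}{m_2+1}.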
SyncroSim is a generalized framework for managing scenario-based datasets (<https://syncrosim.com/>). rsyncrosim provides an interface to SyncroSim. Simulation models can be added to SyncroSim in order to transform these datasets, taking advantage of general features such as defining scenarios of model inputs, running Monte Carlo simulations, and summarizing model outputs. rsyncrosim requires SyncroSim 2.3.5 or higher (API documentation: <https://docs.syncrosim.com/>).
Implementation of the RESTK algorithm based on Markov's inequality, from Vilardell, S., Serra, I., Mezzetti, E., Abella, J., Cazorla, F. J., and Del Castillo, J. (2022), "Using Markov's Inequality with Power-Of-k Function for Probabilistic WCET Estimation", in 34th Euromicro Conference on Real-Time Systems (ECRTS 2022), Leibniz International Proceedings in Informatics (LIPIcs) 231, 20:1-20:24, <doi:10.4230/LIPIcs.ECRTS.2022.20>. This work has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 772773).
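The underlying bound is simply Markov's inequality applied to the k-th power of a non-negative execution-time variable X (a textbook fact, not a restatement of the package's exact estimator):

    P(X \geq a) = P(X^k \geq a^k) \leq \frac{\mathbb{E}[X^k]}{a^k}, \qquad a > 0,\ k \geq 1.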
This package provides functions for risk management and portfolio investment of securities with practical tools for data processing and plotting. Moreover, it contains functions which perform the COS Method, an option pricing method based on the Fourier-cosine series (Fang, F. (2008) <doi:10.1137/080718061>).
This package provides four Boolean matrix factorization (BMF) methods. BMF has many applications, such as data mining and categorical data analysis. BMF is also known as Boolean matrix decomposition (BMD) and was shown to be an NP-hard (non-deterministic polynomial-time hard) problem. Currently implemented methods are Asso (Miettinen, Pauli and others (2008) <doi:10.1109/TKDE.2008.53>), GreConD (R. Belohlavek, V. Vychodil (2010) <doi:10.1016/j.jcss.2009.05.002>), GreConDPlus (R. Belohlavek, V. Vychodil (2010) <doi:10.1016/j.jcss.2009.05.002>), and topFiberM (A. Desouki, M. Roeder, A. Ngonga (2019) <arXiv:1903.10326>).
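In standard notation (a textbook definition, not package-specific), BMF approximates a binary n x m matrix X by the Boolean product of a binary n x k factor A and a binary k x m factor B:

    X \approx A \circ B, \qquad (A \circ B)_{ij} = \bigvee_{l=1}^{k} \left( A_{il} \wedge B_{lj} \right).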
Perform a regression analysis, generate a regression table, create a scatter plot, and download the results. It uses stargazer for generating regression tables and ggplot2 for creating plots. With just two lines of code, you can perform a regression analysis, visualize the results, and save the output. It is part of my 'make R easy' project, which aims to make results accessible to beginners without requiring them to learn several packages. Help from ChatGPT was used. Reference: Wickham (2016) <doi:10.1007/978-3-319-24277-4>.
ROSE (RObust Semiparametric Efficient) random forests for robust semiparametric efficient estimation in partially parametric models (containing generalised partially linear models). Details can be found in the paper by Young and Shah (2024) <doi:10.48550/arXiv.2410.03471>.
Univariate and multivariate methods to analyze randomized response (RR) survey designs (e.g., Warner, S. L. (1965). Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60, 63-69, <doi:10.2307/2283137>). Besides univariate estimates of true proportions, RR variables can be used for correlations, as a dependent variable in a logistic regression (with or without random effects), or as predictors in a linear regression (Heck, D. W., & Moshagen, M. (2018). RRreg: An R package for correlation and regression analyses of randomized response data. Journal of Statistical Software, 85(2), 1-29, <doi:10.18637/jss.v085.i02>). For simulations and the estimation of statistical power, RR data can be generated according to several models. The implemented methods also allow testing the link between continuous covariates and dishonesty in cheating paradigms such as the coin-toss or dice-roll task (Moshagen, M., & Hilbig, B. E. (2017). The statistical analysis of cheating paradigms. Behavior Research Methods, 49, 724-732, <doi:10.3758/s13428-016-0729-x>).
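For context, in Warner's (1965) design the sensitive question is asked with probability p and its negation otherwise, so the probability lambda of a "yes" answer relates to the true prevalence pi by the standard result (not the package's internal notation):

    \lambda = p\pi + (1-p)(1-\pi), \qquad \hat{\pi} = \frac{\hat{\lambda} - (1-p)}{2p - 1}, \quad p \neq \tfrac{1}{2}.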
Terrestrial laser scanning (TLS) data processing and post-hurricane damage severity classification at the individual tree level using deep learning. Further details were published in Klauberg et al. (2023) <doi:10.3390/rs15041165>.
This package provides functions for connecting to and interfacing with an Arduino or similar device. Functionality includes uploading of sketches, setting and reading digital and analog pins, and rudimentary servo control. This project is not affiliated with the Arduino company, <https://www.arduino.cc/>.
This package produces tables with the level of replication (number of replicates) and the experimental uncoded values of the quantitative factors to be used for rotatable Central Composite Design (CCD) experimentation, together with a 2-D contour plot of the corresponding variance of the predicted response, according to Mead et al. (2012) <doi:10.1017/CBO9781139020879> (design_ccd()); it also analyzes CCD data with response surface methodology (ccd_analysis()). A rotatable CCD provides values of the variance of the predicted response that are concentrically distributed around the average treatment combination used in the experimentation, which, with uniform precision (implied by the use of several replicates at the average treatment combination), greatly improves the search for an optimum response. These properties of a rotatable CCD represent undeniable advantages over the classical factorial design, as discussed by Panneton et al. (1999) <doi:10.13031/2013.13267> and Mead et al. (2012) <doi:10.1017/CBO9781139020879.018>, among others.
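As a reminder of the textbook rotatability condition (not taken from the package itself), a CCD whose factorial portion has n_F points is rotatable when the axial distance alpha satisfies:

    \alpha = n_F^{1/4}, \qquad n_F = 2^{k} \text{ for a full factorial in } k \text{ factors}.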
Implementations of algorithms for data analysis based on rough set theory (RST) and fuzzy rough set theory (FRST). We provide implementations not only of the basic concepts of RST and FRST but also of popular algorithms derived from those theories. The methods included in the package can be divided into several categories based on their functionality: discretization, feature selection, instance selection, rule induction, and classification based on nearest neighbors. RST was introduced by Zdzisław Pawlak in 1982 as a sophisticated mathematical tool to model and process imprecise or incomplete information. By using the indiscernibility relation between objects/instances, RST does not require additional parameters to analyze the data. FRST is an extension of RST: it combines the concepts of vagueness and indiscernibility, expressed with fuzzy sets (as proposed by Zadeh in 1965) and RST.
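For context, the basic RST constructions are the indiscernibility relation over a universe U for an attribute subset B, and the lower and upper approximations of a set X (standard definitions, not the package's notation):

    \mathrm{IND}(B) = \{(x,y) \in U^2 : \forall a \in B,\; a(x) = a(y)\}, \qquad
    \underline{B}X = \{x : [x]_B \subseteq X\}, \qquad
    \overline{B}X = \{x : [x]_B \cap X \neq \emptyset\}.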
This package provides the header-only C++ template library FastAD for automatic differentiation (<https://github.com/JamesYang007/FastAD>), along with a few illustrative examples that can all be called from R.
This package implements the rquery piped Codd-style query algebra using data.table. This allows for a high-speed, in-memory implementation of Codd-style data manipulation tools.
This package provides a Java implementation of the RAKE algorithm (Rose, S., Engel, D., Cramer, N., and Cowley, W. (2010) <doi:10.1002/9780470689646.ch1>), which can be used to extract keywords from documents without any training data.
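As a rough illustration of the RAKE idea only (a minimal sketch in Python, not the package's Java implementation; the tiny stopword list is an assumption):

    import re
    from collections import defaultdict

    # Tiny stopword list for illustration only.
    STOPWORDS = {"a", "an", "the", "of", "and", "or", "in", "on",
                 "for", "to", "is", "are", "with", "over"}

    def rake(text, top_n=5):
        # Split text into candidate phrases at stopwords and punctuation.
        tokens = re.findall(r"[a-zA-Z']+|[.,;:!?]", text.lower())
        phrases, current = [], []
        for tok in tokens:
            if tok in STOPWORDS or re.fullmatch(r"[.,;:!?]", tok):
                if current:
                    phrases.append(current)
                    current = []
            else:
                current.append(tok)
        if current:
            phrases.append(current)
        # Score each word by degree / frequency over the candidate phrases.
        freq, degree = defaultdict(int), defaultdict(int)
        for phrase in phrases:
            for w in phrase:
                freq[w] += 1
                degree[w] += len(phrase)
        word_score = {w: degree[w] / freq[w] for w in freq}
        # A phrase's score is the sum of its member word scores.
        scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    print(rake("Compatibility of systems of linear constraints over the set of natural numbers."))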
Doubly ranked tests are nonparametric tests for grouped functional and multivariate data. The testing procedure first ranks a matrix (or three-dimensional array) of data by column (if a matrix) or by cell (across the third dimension, if an array). By default, it calculates a sufficient statistic for the subject's order within the sample using the observed ranks, taken over the columns or cells. Depending on the number of groups, G, the summarized ranks are then analyzed using either a Wilcoxon rank sum test (G = 2) or a Kruskal-Wallis test (G greater than 2).
As of RStudio v1.3, the preferences in the Global Options dialog (and a number of other preferences that aren't) are saved in simple, plain-text JSON files. This package provides an interface for working with these RStudio JSON preference files to easily make modifications without using the point-and-click options menus. This is particularly helpful when working on teams, to ensure a unified experience across machines and to apply best-practice settings.
Random walk functions to extract new variables based on clients' transactional behaviour. For more details, see Eddin et al. (2021) <arXiv:2112.07508v3> and Oliveira et al. (2021) <arXiv:2102.05373v2>.
The Randomized Trait Community Clustering method (Triado-Margarit et al., 2019, <doi:10.1038/s41396-019-0454-4>) is a statistical approach for determining whether an observed trait clustering pattern is related to an increasing environmental constraint. The method (1) determines whether trait clustering exists in the sampled communities and (2) assesses whether the observed clustering signal is related to an increasing environmental constraint along an environmental gradient. When the effect of the environmental gradient is not linear, it also allows determining consistent thresholds on community assembly based on trait values.
R interface to the JDemetra+ 3.x (<https://github.com/jdemetra>) time series analysis software. It offers full access to the options and outputs of X-13, including Reg-ARIMA modelling (automatic AutoRegressive Integrated Moving Average (ARIMA) model with outlier detection and trading-days adjustment) and X-11 decomposition.
Visualization platform for T cell receptor repertoire analysis output results. It includes comparison of sequence frequency among samples, networks of similar sequences, and convergent recombination sources between species. Repertoire analysis is currently at an early stage of development and requires new approaches for repertoire data examination and assessment, which we intend to develop. No publication is available yet (it will be available in the near future): Efroni (2021) <https:>.