Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
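A minimal sketch of calling this endpoint from Python with the requests library; the base URL placeholder and the exact pagination header names are assumptions, since only the query parameters above are documented:

```python
# Sketch only: query the package search API and inspect pagination headers.
import requests

BASE_URL = "https://example.org"  # placeholder; replace with this site's address

resp = requests.get(f"{BASE_URL}/api/packages",
                    params={"search": "hello", "page": 1, "limit": 20})
resp.raise_for_status()

print(resp.json())          # matching packages for the requested page
print(dict(resp.headers))   # pagination info (e.g. number of pages) is returned here
```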
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a wrapper for the Filebin API. Filebin implements convenient file sharing on the web.
Fast and numerically stable estimation of a covariance matrix by banding the Cholesky factor using a modified Gram-Schmidt algorithm implemented in RcppArmadillo. See <http://stat.umn.edu/~molst029> for details on the algorithm.
Access and retrieve vocabulary data from the Finto API <https://api.finto.fi/>, a centralized service for interoperable thesauri, ontologies, and classification schemes for different subject areas.
Processing of large-in-memory/large-on-disk rasters and spatial vectors using GRASS <https://grass.osgeo.org/>. Most functions in the terra package are recreated. Processing of medium-sized and smaller spatial objects will nearly always be faster using terra or sf, but for large-in-memory/large-on-disk objects, fasterRaster may be faster. To use most of the functions, you must have the stand-alone version (not the OSGeo4W installer version) of GRASS 8.0 or higher.
Create secure, encrypted, and password-protected static HTML documents that include the machinery for secure in-browser authentication and decryption.
Simplifies the process of importing and managing input-output matrices from Microsoft Excel into R, and provides a suite of functions for analysis. It leverages R6 classes for clean, memory-efficient object-oriented programming. Furthermore, all linear algebra computations are implemented in Rust to achieve highly optimized performance.
Process raw force-plate data (txt files) by segmenting it into trials and, if needed, calculating (user-defined) descriptive statistics of variables for user-defined time bins (relative to trigger onsets) for each trial. When segmenting the data, baseline correction, filtering, and data imputation can be applied if needed. Experimental data can also be processed and combined with the segmented force-plate data. This procedure is suggested by Johannsen et al. (2023) <doi:10.6084/m9.figshare.22190155> and some of the options (e.g., choice of low-pass filter) are also suggested by Winter (2009) <doi:10.1002/9780470549148>.
This package provides a compositional statistical framework for absolute proportion estimation between fractions in RNA sequencing data. FracFixR addresses the fundamental challenge in fractionated RNA-seq experiments where library preparation and sequencing depth obscure the original proportions of RNA fractions. It reconstructs original fraction proportions using non-negative linear regression, estimates the "lost" unrecoverable fraction, corrects individual transcript frequencies, and performs differential proportion testing between conditions. Supports any RNA fractionation protocol including polysome profiling, sub-cellular localization, and RNA-protein complex isolation.
The Food and Agriculture Organization of the United Nations (FAO) FishStat database is the leading source of global fishery and aquaculture statistics and provides unique information for sector analysis and monitoring. This package provides the global production data from all fisheries and aquaculture in R format, ready for analysis.
Use R as a minimal build system. This might come in handy if you are developing R packages and cannot use a proper build system. Stay away if you can (use a proper build system).
A fuzzy string matching implementation of the fuzzywuzzy <https://github.com/seatgeek/fuzzywuzzy> Python package. It uses the Levenshtein distance <https://en.wikipedia.org/wiki/Levenshtein_distance> to calculate the differences between sequences.
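As an illustration of the underlying idea only (not this package's API; the package's actual ratio definitions may differ), a Levenshtein distance and a simple similarity ratio derived from it can be sketched as:

```python
# Sketch: Levenshtein distance by dynamic programming, plus a naive ratio.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def ratio(a: str, b: str) -> float:
    """Naive similarity score in [0, 100]; 100 means identical strings."""
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein(a, b) / max(len(a), len(b)))

print(levenshtein("kitten", "sitting"))   # 3
print(round(ratio("fuzzy wuzzy", "fuzzy was a bear"), 1))
```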
This package provides robust score tests for generalized linear models (GLMs), based on sign-flipping score contributions. The tests are robust against overdispersion, heteroscedasticity and, in some cases, ignored nuisance variables. See Hemerik, Goeman and Finos (2020) <doi:10.1111/rssb.12369>.
Provides comprehensive tools for analysing and characterizing mixed-level factorial designs arranged in blocks. Includes construction and validation of incidence structures, computation of C-matrices, evaluation of A-, D-, E-, and MV-efficiencies, checking of orthogonal factorial structure (OFS), diagnostics based on Hamming distance, discrepancy measures, B-criterion, Es^2 statistics, J2-distance and J2-efficiency, Phi-p optimality, and symmetry conditions for universal optimality. The methodological framework follows foundational work on factorial and mixed-level design assessment by Xu and Wu (2001) <doi:10.1214/aos/1013699993>, and Gupta (1983) <doi:10.1111/j.2517-6161.1983.tb01253.x>. These methods assist in selecting, comparing, and studying factorial block designs across a range of experimental situations.
This package creates an HTML widget which displays the results of searching for a pattern in files in a given git repository, including all its branches. The results can also be returned as a data frame.
This package provides a versatile implementation of various methods of Functional Data Analysis (FDA) and Empirical Dynamics. Its core is Functional Principal Component Analysis (FPCA), a key technique for functional data analysis of sparsely or densely sampled random trajectories and time courses, via the Principal Analysis by Conditional Estimation (PACE) algorithm. This core algorithm yields covariance and mean functions, eigenfunctions, and principal component scores, for both functional data and derivatives, and for both dense (functional) and sparse (longitudinal) sampling designs. For sparse designs, it provides fitted continuous trajectories with confidence bands, even for subjects with very few longitudinal observations. PACE is a viable and flexible alternative to random effects modeling of longitudinal data. There is also a Matlab version (PACE) that contains some methods not available in fdapace and vice versa. Updates to fdapace were supported by grants from NIH Echo and NSF DMS-1712864 and DMS-2014626. Please cite our package if you use it (you may run the command citation("fdapace") to get the citation format and BibTeX entry). References: Wang, J.L., Chiou, J., Müller, H.G. (2016) <doi:10.1146/annurev-statistics-041715-033624>; Chen, K., Zhang, X., Petersen, A., Müller, H.G. (2017) <doi:10.1007/s12561-015-9137-5>.
This package implements shape-based clustering algorithms for multidimensional longitudinal data based on the Fréchet distance. Two main methods are provided: MFKmL (Multidimensional Fréchet distance-based K-means for Longitudinal data), an extension of the K-means algorithm using the Fréchet distance originally developed in the kmlShape package, adapted for multidimensional trajectories; and SFKmL (Sparse multidimensional Fréchet distance-based K-medoids for Longitudinal data), a K-medoids-based clustering algorithm that incorporates variable selection. These tools are designed to enhance clustering performance in high-dimensional longitudinal data settings, particularly those with time delays, variations in trajectory speed, irregular sampling intervals, and noise. The methods are derived from Kang et al. (2023) <doi:10.1007/s11222-023-10237-z>.
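For intuition only, here is a minimal sketch (not the package's implementation) of the discrete Fréchet distance between two multidimensional trajectories, the dissimilarity underlying both methods:

```python
# Sketch: discrete Fréchet distance via dynamic programming over point couplings.
import numpy as np

def discrete_frechet(P, Q):
    """P, Q: arrays of shape (n, d) and (m, d) holding trajectory points."""
    n, m = len(P), len(Q)
    ca = np.empty((n, m))
    ca[0, 0] = np.linalg.norm(P[0] - Q[0])
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], np.linalg.norm(P[i] - Q[0]))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], np.linalg.norm(P[0] - Q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           np.linalg.norm(P[i] - Q[j]))
    return ca[n - 1, m - 1]

# Two 2-D trajectories sampled at different rates
P = np.array([[0, 0], [1, 1], [2, 2]], dtype=float)
Q = np.array([[0, 0.5], [2, 2.5]], dtype=float)
print(discrete_frechet(P, Q))
```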
This package performs family-based association tests with a polytomous outcome under 2-locus and 1-locus models defined by some design matrix.
The FLEX method, developed by Yoon and Choi (2013) <doi:10.1007/978-3-642-33042-1_21>, performs least squares estimation for fuzzy predictors and outcomes, generating crisp regression coefficients by minimizing the distance between observed and predicted outcomes. It also provides functions for fuzzifying data and inference tasks, including significance testing, fit indices, and confidence interval estimation.
This package provides probability mass functions, cumulative mass functions, negative log-likelihood values, parameter estimation, and data modelling using Binomial Mixture Distributions (BMD) (Manoj et al. (2013) <doi:10.5539/ijsp.v2n2p24>) and Alternate Binomial Distributions (ABD) (Paul (1985) <doi:10.1080/03610928508828990>). See also the accompanying journal article on using the package (<doi:10.21105/joss.01505>).
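As a generic illustration of the underlying model rather than this package's functions, the PMF of a two-component binomial mixture is P(X = k) = w1·Binom(k; n, p1) + w2·Binom(k; n, p2), for example:

```python
# Sketch: PMF of a binomial mixture with mixing weights summing to 1.
from scipy.stats import binom

def binomial_mixture_pmf(k, n, probs, weights):
    return sum(w * binom.pmf(k, n, p) for p, w in zip(probs, weights))

# Example: equal-weight mixture of Binomial(10, 0.2) and Binomial(10, 0.7).
print(binomial_mixture_pmf(3, 10, probs=[0.2, 0.7], weights=[0.5, 0.5]))
```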
Nonparametric estimators and tests for time series analysis. The functions use bootstrap techniques and robust nonparametric difference-based estimators to test for the presence of possibly non-monotonic trends and for synchronicity of trends in multiple time series.
This package provides a fast and flexible implementation of Callaway and Sant'Anna's (2021) <doi:10.1016/j.jeconom.2020.12.001> staggered Difference-in-Differences (DiD) estimators. fastdid reduces the computation time from hours to seconds and incorporates extensions such as time-varying covariates and multiple events.
This package provides a simplified interface to the Central Data Repository REST API service made available by the United States Federal Financial Institutions Examination Council ('FFIEC'). Contains functions to retrieve reports of Condition and Income (Call Reports) and Uniform Bank Performance Reports ('UBPR') in list or tidy data frame format for most FDIC insured institutions. See <https://cdr.ffiec.gov/public/Files/SIS611_-_Retrieve_Public_Data_via_Web_Service.pdf> for the official REST API documentation published by the FFIEC.
This data contains a large variety of information on players and their current attributes on Fantasy Premier League <https://fantasy.premierleague.com/>. In particular, it contains a `next_gw_points` (next gameweek points) value for each player given their attributes in the current week. Rows represent player-gameweeks, i.e. for each player there is a row for each gameweek. This makes the data suitable for modelling a player's next gameweek points, given attributes such as form, total points, and cost at the current gameweek. This data can therefore be used to create Fantasy Premier League bots that may use a machine learning algorithm and a linear programming solver (for example) to return the best possible transfers and team to pick for each gameweek, thereby fully automating the decision making process in Fantasy Premier League. This function simply supplies the required data for such a task.
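Purely as a hypothetical sketch of such a modelling task (the file name and the attribute column names other than next_gw_points are assumptions), one could fit a baseline regression like this:

```python
# Sketch: predict next-gameweek points from current-week attributes.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("fpl_player_gameweeks.csv")   # assumed export of the dataset
X = df[["form", "total_points", "cost"]]       # assumed attribute column names
y = df["next_gw_points"]                       # column named in the description

model = LinearRegression().fit(X, y)
print(model.score(X, y))                       # in-sample R^2 of the baseline fit
```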
We present an implementation of the algorithms required to simulate large-scale social networks and retrieve their most relevant metrics. Details can be found in the accompanying scientific paper in the Journal of Statistical Software, <doi:10.18637/jss.v096.i07>.