Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
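As an illustration, the endpoint can be queried from R with the httr package; the base URL below is only a placeholder for wherever this service is hosted, and the shape of the returned fields is whatever the API actually sends back:

    library(httr)
    resp <- GET("https://example.org/api/packages",  # placeholder base URL
                query = list(search = "hello", page = 1, limit = 20))
    str(content(resp, as = "parsed"))  # the matching packages
    headers(resp)                      # pagination information lives in the headers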
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Sequential Kalman filter for scalable online changepoint detection in temporally correlated data. It enables fast detection of single and multiple change points, with support for missing values. See the reference: Hanmo Li, Yuedong Wang, Mengyang Gu (2023) <arXiv:2310.18611>.
Function for the GUI API to interact with external IDEs and code editors.
Build a custom Europe SpatialPolygonsDataFrame; if you don't know what a SpatialPolygonsDataFrame is, see SpatialPolygons() in the sp package. It is intended, for example, for use with mapLayout() in antaresViz. Antares is a powerful software tool developed by RTE to simulate and study electric power systems (more information about Antares here: <https://antares-simulator.org/>).
Implementation of uniformity tests on the circle and (hyper)sphere. The main function of the package is unif_test(), which conveniently collects more than 35 tests for assessing uniformity on S^{p-1} = {x in R^p : ||x|| = 1}, p >= 2. The test statistics are implemented in the unif_stat() function, which allows computing several statistics for different samples within a single call, thus facilitating Monte Carlo experiments. Furthermore, the unif_stat_MC() function allows parallelizing them in a simple way. The asymptotic null distributions of the statistics are available through the function unif_stat_distr(). The core of sphunif is coded in C++, relying on the Rcpp package. The package also provides several novel datasets and allows for the replicability of the data applications/simulations in García-Portugués et al. (2021) <doi:10.1007/978-3-030-69944-4_12>, García-Portugués et al. (2023) <doi:10.3150/21-BEJ1454>, Fernández-de-Marcos and García-Portugués (2024) <doi:10.1016/j.spl.2024.110218>, and García-Portugués et al. (2025) <doi:10.1080/01621459.2025.2566414>.
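As a rough sketch of the interface described above (the argument names are assumptions taken from this description, not a verified signature), a uniformity test on the circle could look like:

    library(sphunif)
    set.seed(1)
    angles <- runif(50, min = 0, max = 2 * pi)  # sample of circular data given as angles
    # 'data' and 'type' are assumed from the description; the Rayleigh test is one of
    # the classical uniformity tests the package is described as collecting
    unif_test(data = angles, type = "Rayleigh")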
Semantic versions allow for standardized version management. This package implements semantic versioning handling in R, using R6 to create a mutable object that can handle deciphering and checking versions.
Recently, regularized variable selection has emerged as a powerful tool to identify and dissect gene-environment interactions. Nevertheless, in longitudinal studies with high dimensional genetic factors, regularization methods for G×E interactions have not been systematically developed. In this package, we provide the implementation of sparse group variable selection, based on both the quadratic inference function (QIF) and generalized estimating equation (GEE), to accommodate the bi-level selection for longitudinal G×E studies with high dimensional genomic features. Alternative methods conducting only the group or individual level selection have also been included. The core modules of the package have been developed in C++.
This package provides a compilation of functions designed to assist users in the correlation analysis of crop yield and soil test values. It includes functions to estimate crop response patterns to soil nutrient availability and critical soil test values using various approaches, such as: 1) the modified arcsine-log calibration curve (Correndo et al. (2017) <doi:10.1071/CP16444>); 2) the graphical Cate-Nelson quadrants analysis (Cate & Nelson (1965)); 3) the statistical Cate-Nelson quadrants analysis (Cate & Nelson (1971) <doi:10.2136/sssaj1971.03615995003500040048x>); 4) the linear-plateau regression (Anderson & Nelson (1975) <doi:10.2307/2529422>); 5) the quadratic-plateau regression (Bullock & Bullock (1994) <doi:10.2134/agronj1994.00021962008600010033x>); and 6) the Mitscherlich-type exponential regression (Melsted & Peck (1977) <doi:10.2134/asaspecpub29.c1>). The package development stemmed from ongoing work with the Fertilizer Recommendation Support Tool (FRST) and Feed the Future Innovation Lab for Collaborative Research on Sustainable Intensification (SIIL) projects.
Data from statistical agencies and other institutions are mostly confidential. This package, introduced in Templ, Kowarik and Meindl (2017) <doi:10.18637/jss.v067.i04>, can be used for the generation of anonymized (micro)data, i.e. for the creation of public- and scientific-use files. The theoretical basis for the methods implemented can be found in Templ (2017) <doi:10.1007/978-3-319-50272-4>. Various risk estimation and anonymization methods are included. Note that the package includes a graphical user interface, published in Meindl and Templ (2019) <doi:10.3390/a12090191>, that allows the use of various methods of this package.
Decompose a time series into seasonal, trend, and remainder components using an implementation of Seasonal Decomposition of Time Series by Loess (STL) that provides several enhancements over the STL method in the stats package. These enhancements include handling missing values, providing higher order (quadratic) loess smoothing with automated parameter choices, frequency component smoothing beyond the seasonal and trend components, and some basic plot methods for diagnostics.
This package contains an implementation of invariant causal prediction for sequential data. The main function in the package is seqICP(), which performs linear sequential invariant causal prediction and has guaranteed type I error control. For non-linear dependencies the package also contains a non-linear method, seqICPnl(), which allows the user to input any regression procedure and performs tests based on a permutation approach that is only approximately correct. In order to test whether an individual set S is invariant, the package contains the subroutines seqICP.s() and seqICPnl.s() corresponding to the respective main methods.
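A minimal sketch of the linear method described above, assuming seqICP() accepts a predictor matrix X and a response vector Y (argument names inferred from this description) and that summary() has a method for the returned object:

    library(seqICP)
    set.seed(1)
    n  <- 200
    X1 <- rnorm(n)
    X2 <- rnorm(n)
    Y  <- 0.8 * X1 + rnorm(n)    # only X1 is a parent of Y in this toy model
    X  <- cbind(X1, X2)
    fit <- seqICP(X = X, Y = Y)  # linear sequential invariant causal prediction
    summary(fit)                 # sets accepted as invariant / estimated parents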
Connecting to databases requires boilerplate code to specify connection parameters and to set up sessions properly with the DBMS. This package provides a simple tool that serves two purposes: abstracting connection details, including secret credentials, out of your source code, and managing configuration for frequently-used database connections in a persistent and flexible way, while minimizing requirements on the runtime environment.
Carries out a two-level sample selection where the possibility of an initially selected site not wanting to participate is anticipated, and the site is optimally replaced. The procedure aims to reduce bias (and/or loss of external validity) with respect to the target population. In selecting units and sub-units, sitepickR uses the cube method developed by Deville & Tillé (2004) <http://www.math.helsinki.fi/msm/banocoss/Deville_Tille_2004.pdf> and described in Tillé (2011) <https://www150.statcan.gc.ca/n1/en/pub/12-001-x/2011002/article/11609-eng.pdf?st=5-sx8Q8n>. The cube method is a probability sampling method that is designed to satisfy criteria for balance between the sample and the population. Recent research has shown that this method performs well in simulations for studies of educational programs (see Fay & Olsen (2021), under review). To implement the cube method, sitepickR uses the sampling R package <https://cran.r-project.org/package=sampling>. To implement statistical matching, sitepickR uses the MatchIt R package <https://cran.r-project.org/package=MatchIt>.
Capable of deriving seasonal statistics, such as "normals", and analysis of seasonal data, such as departures. This package also has graphics capabilities for representing seasonal data, including boxplots for seasonal parameters, and bars for summed normals. There are many specific functions related to climatology, including precipitation normals, temperature normals, cumulative precipitation departures and precipitation interarrivals. However, this package is designed to represent any time-varying parameter with a discernible seasonal signal, such as found in hydrology and ecology.
This package provides a collection of functions to perform Detrended Fluctuation Analysis (DFA exponent) (Guedes et al. (2019) <doi:10.1016/j.physa.2019.04.132>), the detrended cross-correlation coefficient (RHODCCA) (Guedes & Zebende (2019) <doi:10.1016/j.physa.2019.121286>), the DMCA cross-correlation coefficient, and the detrended multiple cross-correlation coefficient (DMC) (Guedes, Silva-Filho & Zebende (2018) <doi:10.1016/j.physa.2021.125990>), all with a sliding-windows approach.
This package provides tools for manipulating sound files for bioacoustic analysis, and for preparing these analyses for publication. The package validates that values are physically possible wherever feasible.
Exploratory analysis on any input data, describing the structure and the relationships present in the data. The package automatically selects the variables and computes related descriptive statistics. Information value, weight of evidence, custom tables, summary statistics, and graphical techniques are produced for both numeric and categorical predictors.
Includes an interactive application designed to support educators in wide-ranging disciplines, with a particular focus on those teaching introductory statistical methods (descriptive and/or inferential) for data analysis. Users are able to randomly generate data, make new versions of existing data through common adjustments (e.g., add random normal noise and perform transformations), and check the suitability of the resulting data for statistical analyses.
The Statistical Learning Theory (SLT) provides the theoretical background to ensure that a supervised algorithm generalizes the mapping f:X -> Y given f is selected from its search space bias F. This formal result depends on the Shattering coefficient function N(F,2n) to upper bound the empirical risk minimization principle, from which one can estimate the necessary training sample size to ensure the probabilistic learning convergence and, most importantly, the characterization of the capacity of F, including its under- and overfitting abilities while addressing specific target problems. In this context, we propose a new approach to estimate the maximal number of hyperplanes required to shatter a given sample, i.e., to separate every pair of points from one another, based on the recent contributions by Har-Peled and Jones in the dataset partitioning scenario, and use such foundation to analytically compute the Shattering coefficient function for both binary and multi-class problems. As the main contributions, one can use our approach to study the complexity of the search space bias F, estimate training sample sizes, and parametrize the number of hyperplanes a learning algorithm needs to address some supervised task, which is especially appealing for deep neural networks. Reference: de Mello, R.F. (2019) "On the Shattering Coefficient of Supervised Learning Algorithms" <arXiv:1911.05461>; de Mello, R.F., Ponti, M.A. (2018, ISBN: 978-3319949888) "Machine Learning: A Practical Approach on the Statistical Learning Theory".
An algorithm to cluster satellite hot spot data spatially and temporally.
Provides regularized maximum covariance analysis incorporating smoothness, sparseness and orthogonality of coupled patterns by using the alternating direction method of multipliers algorithm. The method can be applied to either regularly or irregularly spaced data, including 1D, 2D, and 3D (Wang and Huang, 2018 <doi:10.1002/env.2481>).
Obtain parameters of Svensson's Method, including percentage agreement, systematic change and individual change. Also, the contingency table can be generated. Svensson's Method is a rank-invariant nonparametric method for the analysis of ordered scales which measures the level of change both from systematic and individual aspects. For the details, please refer to Svensson E. Analysis of systematic and random differences between paired ordinal categorical data [dissertation]. Stockholm: Almqvist & Wiksell International; 1993.
Handles datetimes as integers for use inside Discrete-Event Simulations (DES). The conversion is made using the internally generic function as.numeric() from the base package. DES is described in Simulation Modeling and Analysis by Averill Law and David Kelton (1999) <doi:10.2307/2288169>.
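As a base-R illustration (not this package's own API) of the conversion the description refers to, as.numeric() turns a datetime into seconds since 1970-01-01 UTC, which can then serve as an integer simulation clock:

    t0 <- as.POSIXct("2024-01-01 00:00:00", tz = "UTC")
    as.numeric(t0)                          # 1704067200 seconds since the epoch
    as.numeric(t0 + 3600) - as.numeric(t0)  # an event one hour later: 3600 units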
Computes the maximum likelihood estimator of the generalised additive and index regression with shape constraints. Each additive component function is assumed to obey one of the nine possible shape restrictions: linear, increasing, decreasing, convex, convex increasing, convex decreasing, concave, concave increasing, or concave decreasing. For details, see Chen and Samworth (2016) <doi:10.1111/rssb.12137>.
Semiparametric and parametric estimation of INAR models including a finite sample refinement (Faymonville et al. (2022) <doi:10.1007/s10260-022-00655-0>) for the semiparametric setting introduced in Drost et al. (2009) <doi:10.1111/j.1467-9868.2008.00687.x>, different procedures to bootstrap INAR data (Jentsch, C. and Weiß, C.H. (2017) <doi:10.3150/18-BEJ1057>) and flexible simulation of INAR data.