Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
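For example, the endpoint can be queried from Guile with the (web client) module. This is a minimal sketch; the base URL is a placeholder for whichever instance you are querying, and the body is printed as returned.

    (use-modules (web client) (web response) (web uri))

    ;; Placeholder base URL; substitute the address of the instance you query.
    (define url "https://example.org/api/packages?search=hello&page=1&limit=20")

    (define-values (response body) (http-get (string->uri url)))

    ;; Pagination details are carried in the response headers.
    (display (response-headers response)) (newline)
    ;; The body holds the matching packages.
    (display body) (newline)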
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
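An entry typically follows the standard Guix channel declaration syntax. The snippet below is only a sketch with a hypothetical name and URL; the exact fields expected in the webring's channels.scm may differ.

    ;; Hypothetical entry; replace the name and URL with your channel's details.
    (channel
     (name 'my-channel)
     (url "https://example.org/my-channel.git")
     (branch "main"))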
Allows Shiny developers to incorporate UI elements based on Google's Material Design. See <https://material.io/guidelines/> for more information.
The main function is icweib(), which fits a stratified Weibull proportional hazards model for left censored, right censored, interval censored, and non-censored survival data. We parameterize the Weibull regression model so that it allows a stratum-specific baseline hazard function, but where the effects of other covariates are assumed to be constant across strata. Please refer to Xiangdong Gu, David Shapiro, Michael D. Hughes and Raji Balasubramanian (2014) <doi:10.32614/RJ-2014-003> for more details.
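For orientation, the stratified proportional hazards structure described above is conventionally written as follows; this is a sketch of the standard parameterization, and the package's exact scale/shape parameterization may differ:

    h_j(t \mid Z) = h_{0j}(t)\,\exp(\beta^{\top} Z), \qquad h_{0j}(t) = \lambda_j \gamma_j t^{\gamma_j - 1},

where j indexes the strata: the Weibull baseline hazard h_{0j} varies by stratum, while the regression coefficients \beta are shared across strata.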
Semiparametric estimation of stochastic frontier models following a two-step procedure: in the first step, semiparametric or nonparametric regression techniques are used to relax parametric restrictions on the functional form representing the technology; in the second step, variance parameters are obtained by pseudolikelihood estimators or by the method of moments.
This package creates stratum orthogonal arrays (also known as strong orthogonal arrays). These are arrays with more levels per column than the typical orthogonal array, and whose low-order projections behave like orthogonal arrays when levels are collapsed to coarser strata. Details are described in Groemping (2022) "A unifying implementation of stratum (aka strong) orthogonal arrays" <http://www1.bht-berlin.de/FB_II/reports/Report-2022-002.pdf>.
This package implements S-type ridge regression, a robust and multicollinearity-aware linear regression estimator that combines S-type robust weighting (via the Stype.est package) with ridge penalization. The ridge parameter is selected automatically using the ridgregextra approach, targeting a variance inflation factor (VIF) close to 1. The estimator returns comprehensive outputs (coefficients, fitted values, residuals, mean squared error (MSE), etc.) through a simple x/y interface with optional user-supplied weights. See Sazak and Mutlu (2021) <doi:10.1080/03610918.2021.1928196>, Karadag et al. (2023) <https://CRAN.R-project.org/package=ridgregextra> and Sazak et al. (2025) <https://CRAN.R-project.org/package=Stype.est>.
We present a rank-based Mercer kernel to compute a pair-wise similarity metric corresponding to an informative representation of the data. We tailor the development of the kernel to encode our prior knowledge about the data distribution over a probability space. The philosophical concept behind our construction is that objects whose feature values fall on the extremes of that feature's probability mass distribution are more similar to each other than objects whose feature values lie closer to the mean. Semblance emphasizes features whose values lie far away from the mean of their probability distribution. The kernel relies on properties empirically determined from the data and does not assume an underlying distribution. The use of feature ranks on a probability space ensures that Semblance is computationally efficacious, robust to outliers, and statistically stable, making it a widely applicable algorithm for pattern analysis. The output of the kernel is a square, symmetric matrix that gives proximity values between pairs of observations.
This package implements algorithms for terrestrial, mobile, and airborne lidar processing, tree detection, segmentation, and attribute estimation (Donager et al., 2021) <doi:10.3390/rs13122297>, and a hierarchical patch delineation algorithm PatchMorph (Girvetz & Greco, 2007) <doi:10.1007/s10980-007-9104-8>. Tree detection uses rasterized point cloud metrics (relative neighborhood density and verticality) combined with RANSAC cylinder fitting to locate tree boles and estimate diameter at breast height. Tree segmentation applies graph-theory approaches inspired by Tao et al. (2015) <doi:10.1016/j.isprsjprs.2015.08.007> with cylinder fitting methods from de Conto et al. (2017) <doi:10.1016/j.compag.2017.07.019>. PatchMorph delineates habitat patches across spatial scales using organism-specific thresholds. Built on lidR (Roussel et al., 2020) <doi:10.1016/j.rse.2020.112061>.
This package provides a framework for data stream modeling and associated data mining tasks such as clustering and classification. The development of this package was supported in part by NSF IIS-0948893, NSF CMMI 1728612, and NIH R21HG005912. Hahsler et al. (2017) <doi:10.18637/jss.v076.i14>.
The Local Correlation Integral (LOCI) method for outlier identification is implemented here. The implementation follows Breunig et al. (2000); see <doi:10.1145/342009.335388>.
This package provides functions for performing set-theoretic multi-method research, QCA for clustered data, theory evaluation, Enhanced Standard Analysis, indirect calibration, and radar visualisations. Additionally, it includes data to replicate the examples in the books by Oana, I. E., C. Q. Schneider, and E. Thomann, "Qualitative Comparative Analysis (QCA) using R: A Beginner's Guide" (Cambridge University Press), and by C. Q. Schneider and C. Wagemann, "Set Theoretic Methods for the Social Sciences" (Cambridge University Press).
From output files obtained from the software ModestR, the relative contribution of factors to explaining species distribution is depicted using several plots. A global geographic raster file for each environmental variable may also be obtained, giving the mean relative contribution of the factor to explaining species distribution, considering all species present in each raster cell. Finally, for each variable it is also possible to compare the frequencies of the variable in the cells where the species is present with the frequencies of the same variable in the cells of the extent.
The developed function is designed for the generation of spatial grids based on user-specified longitude and latitude coordinates. The function first validates the input longitude and latitude values, ensuring they fall within the appropriate geographic ranges. It then creates a polygon from the coordinates and determines the appropriate Universal Transverse Mercator (UTM) zone based on the provided hemisphere and longitude values. The input shapefile is subsequently transformed to the UTM projection when necessary. Finally, a spatial grid is generated with the specified interval and saved as a shapefile. For method details see Brus, D. J. (2022) <doi:10.1201/9781003258940>. The function takes into account crucial parameters such as the hemisphere (north or south), the desired grid interval, and the output shapefile path. It is an efficient tool that simplifies the generation of empty spatial grids for applications such as geostatistical analysis and digital soil mapping product generation. Whether for environmental studies, urban planning, or any other geospatial analysis, this package caters to the diverse needs of users working with spatial data, enhancing the accessibility and ease of spatial data processing and visualization.
This package implements methods for obtaining kernel density estimates subject to a variety of shape constraints (unimodality, bimodality, symmetry, tail monotonicity, bounds, and constraints on the number of inflection points). Enforcing constraints can eliminate unwanted waves or kinks in the estimate, which improves its subjective appearance and can also improve statistical performance. The main function scdensity() is very similar to the density() function in stats, allowing shape-restricted estimates to be obtained with little effort. The methods implemented in this package are described in Wolters and Braun (2017) <doi:10.1080/03610918.2017.1288247>, Wolters (2012) <doi:10.18637/jss.v047.i06>, and Hall and Huang (2002) <https://www3.stat.sinica.edu.tw/statistica/j12n4/j12n41/j12n41.htm>. See the scdensity() help for full citations.
Estimate the regression coefficients and the baseline hazard of proportional hazard Cox models with left, right or interval censored survival data using maximum penalised likelihood. A non-parametric smooth estimate of the baseline hazard function is provided.
Estimates area and subarea level proportions using the Small Area Estimation (SAE) Twofold Subarea Model with a hierarchical Bayesian (HB) approach under the Beta distribution. A number of simulated datasets generated for illustration purposes are also included. The rstan package is employed to estimate parameters via the Hamiltonian Monte Carlo and No-U-Turn Sampler algorithms. The model-based estimators include the HB mean, the variation of the mean, and quantiles. For references, see Rao and Molina (2015) <doi:10.1002/9781118735855>, Torabi and Rao (2014) <doi:10.1016/j.jmva.2014.02.001>, Mohadjer et al. (2007) <http://www.asasrms.org/Proceedings/y2007/Files/JSM2007-000559.pdf>, Erciulescu et al. (2019) <doi:10.1111/rssa.12390>, and Yudasena (2024).
This package implements a test for distinguishing between true long memory and spurious long memory. Reference: Qu, Z. (2011). "A Test Against Spurious Long Memory." Journal of Business & Economic Statistics, 29(3), 423–438. <doi:10.1198/jbes.2010.09153>.
Documentation and prototypes for the earliest (circa 2010) open-source effort to reverse engineer the sas7bdat file format. The package includes a prototype reader for sas7bdat files. However, newer packages may contain more robust readers for sas7bdat files.
This package provides a collection of self-labeled techniques for semi-supervised classification. In semi-supervised classification, both labeled and unlabeled data are used to train a classifier. This learning paradigm has obtained promising results, particularly when only a reduced set of labeled examples is available. The techniques implemented enlarge the original labeled set using the most confident predictions on the unlabeled data, and can be applied to classification problems in several domains by specifying a supervised base classifier. At low ratios of labeled data, they can be shown to perform better than classical supervised classifiers.
Penalized and non-penalized maximum likelihood estimation of smooth transition vector autoregressive models with various types of transition weight functions, conditional distributions, and identification methods. Constrained estimation with various types of constraints is available. Also provided are residual-based model diagnostics, forecasting, simulations, counterfactual analysis, and computation of impulse response functions, generalized impulse response functions, generalized forecast error variance decompositions, and historical decompositions. See Heather Anderson, Farshid Vahid (1998) <doi:10.1016/S0304-4076(97)00076-6>, Helmut Lütkepohl, Aleksei Netšunajev (2017) <doi:10.1016/j.jedc.2017.09.001>, Markku Lanne, Savi Virolainen (2025) <doi:10.1016/j.jedc.2025.105162>, Savi Virolainen (2025) <doi:10.48550/arXiv.2404.19707>.
This package provides the density and a random number generator for the Scale-Shape Mixtures of Skew-Normal Distributions proposed by Jamalizadeh and Lin (2016) <doi:10.1007/s00180-016-0691-1>.
Bayesian clustering of spatial regions with similar functional shapes using spanning trees and latent Gaussian models. The method enforces spatial contiguity within clusters and supports a wide range of latent Gaussian models, including non-Gaussian likelihoods, via the R-INLA framework. The algorithm is based on Zhong, R., Chacón-Montalván, E. A., and Moraga, P. (2024) <doi:10.48550/arXiv.2407.12633>, extending the approach of Zhang, B., Sang, H., Luo, Z. T., and Huang, H. (2023) <doi:10.1214/22-AOAS1643>. The package includes tools for model fitting, convergence diagnostics, visualization, and summarization of clustering results.
Computes multi-gene descent probabilities (Thompson, 1983, <doi:10.1098/rspb.1983.0072>) and special cases thereof (Thompson, 1986, <doi:10.1002/zoo.1430050210>), including inbreeding and kinship coefficients. It does much more, however: it can compute the probability of any set of genes descending from any other set of genes.
Statistical methods for the modeling and monitoring of time series of counts, proportions and categorical data, as well as for the modeling of continuous-time point processes of epidemic phenomena. The monitoring methods focus on aberration detection in count data time series from public health surveillance of communicable diseases, but applications could just as well originate from environmetrics, reliability engineering, econometrics, or social sciences. The package implements many typical outbreak detection procedures such as the (improved) Farrington algorithm, or the negative binomial GLR-CUSUM method of Hoehle and Paul (2008) <doi:10.1016/j.csda.2008.02.015>. A novel CUSUM approach combining logistic and multinomial logistic modeling is also included. The package contains several real-world data sets, the ability to simulate outbreak data, and to visualize the results of the monitoring in a temporal, spatial or spatio-temporal fashion. A recent overview of the available monitoring procedures is given by Salmon et al. (2016) <doi:10.18637/jss.v070.i10>. For the retrospective analysis of epidemic spread, the package provides three endemic-epidemic modeling frameworks with tools for visualization, likelihood inference, and simulation. hhh4() estimates models for (multivariate) count time series following Paul and Held (2011) <doi:10.1002/sim.4177> and Meyer and Held (2014) <doi:10.1214/14-AOAS743>. twinSIR() models the susceptible-infectious-recovered (SIR) event history of a fixed population, e.g., epidemics across farms or networks, as a multivariate point process as proposed by Hoehle (2009) <doi:10.1002/bimj.200900050>. twinstim() estimates self-exciting point process models for a spatio-temporal point pattern of infective events, e.g., time-stamped geo-referenced surveillance data, as proposed by Meyer et al. (2012) <doi:10.1111/j.1541-0420.2011.01684.x>. A recent overview of the implemented space-time modeling frameworks for epidemic phenomena is given by Meyer et al. (2017) <doi:10.18637/jss.v077.i11>.
This package performs simulation and inference of diffusion processes on the circle. Stochastic correlation models based on circular diffusion models are provided. For details see Majumdar, S. and Laha, A. K. (2024) "Diffusion on the circle and a stochastic correlation model" <doi:10.48550/arXiv.2412.06343>.