Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
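For example, a quick call from R using the httr package (the base URL below is a placeholder for this site's address, not part of the API specification):

    library(httr)
    # Query the package search endpoint; search, page, and limit are sent
    # as URL parameters, matching the GET request shown above.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination information lives here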
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Converts dates to different SAS date formats. In SAS, dates are a special case of numeric values: each day is assigned a specific numeric value, starting from January 1, 1960. That date is assigned the value 0, the next day has the value 1, and so on; the days before it are represented by -1, -2, and so on. With this approach, SAS can represent any date in the past or the future. SAS uses many formats to represent dates and date-times. This package provides functions that convert dates to these SAS date formats.
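To make the encoding concrete, here is a minimal base R illustration of the underlying arithmetic (not this package's API): R counts days from 1970-01-01 and SAS from 1960-01-01, so a SAS date value is the R date value plus the 3653 days between the two epochs.

    # Convert a calendar date to its SAS date value (days since 1960-01-01).
    as_sas_date_value <- function(d) as.numeric(as.Date(d)) + 3653
    as_sas_date_value("1960-01-01")  # 0
    as_sas_date_value("1960-01-02")  # 1
    as_sas_date_value("1959-12-31")  # -1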
This package provides a tool that makes estimating models in state space form a breeze. See "Time Series Analysis by State Space Methods" by Durbin and Koopman (2012, ISBN: 978-0-19-964117-8) for details about the algorithms implemented.
Compare directories flexibly (by date, content, or both) and synchronize files efficiently, with asymmetric and symmetric modes, helper tools, and visualization support for file management.
Conduct latent trajectory class analysis with longitudinal data. Our method supports longitudinal continuous, binary and count data. For more methodological details, please refer to Hart, K.R., Fei, T. and Hanfelt, J.J. (2020), Scalable and robust latent trajectory class analysis using artificial likelihood. Biometrics <doi:10.1111/biom.13366>.
Empirical likelihood methods for asymptotically efficient estimation of models based on conditional or unconditional moment restrictions; see Kitamura, Tripathi & Ahn (2004) <doi:10.1111/j.1468-0262.2004.00550.x> and Owen (2013) <doi:10.1002/cjs.11183>. Kernel-based non-parametric methods for density/regression estimation and numerical routines for empirical likelihood maximisation are implemented in Rcpp for speed.
This package provides a statistical learning method to simultaneously predict a range of target phenotypes using codified and natural language processing (NLP)-derived Electronic Health Record (EHR) data. See Ahuja et al (2020) JAMIA <doi:10.1093/jamia/ocaa079> for details.
Detect libraries used in a project and automatically create software bibliographies in 'PDF', 'Word', 'Rmarkdown', and 'BibTeX' formats.
Set of functions for Stochastic Data Envelopment Analysis. Chance-constrained versions of radial, directional, and additive DEA models are implemented, as well as super-efficiency models. See: Cooper, W.W.; Deng, H.; Huang, Z.; Li, S.X. (2002) <doi:10.1057/palgrave.jors.2601433>, Bolós, V.J.; Benítez, R.; Coll-Serrano, V. (2024) <doi:10.1016/j.orp.2024.100307>.
This package provides a powerful and flexible tool for visualizing proportional data across spatially resolved contexts. By combining the concepts of scatter plots and stacked bar charts, scatterbar allows users to create scattered bar chart plots, which effectively display the proportions of different categories at each (x, y) location. This visualization is particularly useful for applications where understanding the distribution of categories across spatial coordinates is essential. This package features automatic determination of optimal scaling factors based on data, customizable scaling and padding options for both x and y axes, flexibility to specify custom colors for each category, options to customize the legend title, and integration with ggplot2 for robust and high-quality visualizations. For more details, see Velazquez et al. (2024) <doi:10.1101/2024.08.14.606810>.
Extends the SelectBoost approach to Generalized Additive Models for Location, Scale and Shape (GAMLSS). Implements bootstrap stability-selection across parameter-specific formulas (mu, sigma, nu, tau) via gamlss::stepGAIC(). Includes optional standardization of predictors and helper functions for corrected AIC calculation. More details can be found in Bertrand and Maumy (2024) <https://hal.science/hal-05352041> that highlights correlation-aware resampling to improve variable selection for GAMLSS and quantile regression when predictors are numerous and highly correlated.
Allows the creation and manipulation of C++ std::vector objects in R.
Provides convenience functions to connect R with the Spotify application programming interface ('API'). It first helps set up the OAuth2.0 authentication flow. The default output of the get_*() functions is tidy, but the functions can optionally return the raw response from the API as well. The search_*() and get_*() functions can be combined. See the vignette for more information and examples, and the official Spotify for Developers website <https://developer.spotify.com/documentation/web-api/> for information about the 'Web API'.
Approximate Bayesian regularization using Gaussian approximations. The input is a vector of estimates and a Gaussian error covariance matrix of the key parameters. Bayesian shrinkage is then applied to obtain parsimonious solutions. The method is described in Karimova, van Erp, Leenders, and Mulder (2024) <DOI:10.31234/osf.io/2g8qm>. Gibbs samplers are used for model fitting. The supported shrinkage priors are Gaussian (ridge) priors, Laplace (lasso) priors (Park and Casella, 2008 <DOI:10.1198/016214508000000337>), and horseshoe priors (Carvalho et al., 2010 <DOI:10.1093/biomet/asq017>). These priors include an option for grouped regularization of different subsets of parameters (Meier et al., 2008 <DOI:10.1111/j.1467-9868.2007.00627.x>). F priors are used for the penalty parameters lambda^2 (Mulder and Pericchi, 2018 <DOI:10.1214/17-BA1092>); these correspond to half-Cauchy priors on lambda (Carvalho, Polson, and Scott, 2010 <DOI:10.1093/biomet/asq017>).
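For intuition about the Gaussian (ridge) case, here is a hedged base R illustration of conjugate shrinkage of a vector of estimates given its error covariance (the closed-form posterior mean under a N(0, tau2*I) prior; the package itself fits via Gibbs sampling, and the function name is illustrative only):

    # Shrink estimates `est` with error covariance `Sigma` toward zero.
    ridge_shrink <- function(est, Sigma, tau2) {
      P <- solve(Sigma)  # error precision matrix
      # Posterior mean: (P + I/tau2)^{-1} P est
      drop(solve(P + diag(1 / tau2, length(est)), P %*% est))
    }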
This package provides two methods for segmentation and joint segmentation/clustering of bivariate time series. Originally intended for ecological segmentation (home range and behavioural modes) but easily applied to other series, the package also provides tools for analysing outputs from the R packages 'moveHMM' and 'marcher'. The segmentation method is a bivariate extension of Lavielle's method available in 'adehabitatLT' (Lavielle, 1999 <doi:10.1016/S0304-4149(99)00023-X> and 2005 <doi:10.1016/j.sigpro.2005.01.012>). This method relies on dynamic programming for efficient segmentation. The segmentation/clustering method alternates steps of dynamic programming with an Expectation-Maximization algorithm. This is an extension to the bivariate case of the method of Picard et al. (2007) <doi:10.1111/j.1541-0420.2006.00729.x> (formerly available in the 'cghseg' package). The method is fully described in Patin et al. (2018) <doi:10.1101/444794>.
Minimal R client for the Screenshotbase API to render website screenshots and query account status. Provides functions to set the API key, call the status endpoint, and take a screenshot as a raw image response.
Data practitioners regularly use the R and Python programming languages to prepare data for analyses. Thus, they encode important data preprocessing decisions in R and Python code. The smallsets package subsequently decodes these decisions into a Smallset Timeline, a static, compact visualisation of data preprocessing decisions (Lucchesi et al. (2022) <doi:10.1145/3531146.3533175>). The visualisation consists of small data snapshots of different preprocessing steps. The smallsets package builds this visualisation from a user's dataset and preprocessing code located in an 'R', 'R Markdown', 'Python', or 'Jupyter Notebook' file. Users simply add structured comments with snapshot instructions to the preprocessing code. One optional feature in smallsets requires installation of the Gurobi optimisation software and the 'gurobi' R package, available from <https://www.gurobi.com>. More information regarding the optional feature and 'gurobi' installation can be found in the smallsets vignette.
This package provides tools for generating and analyzing simulation studies. Users may easily specify all terms of a simulation study, often in a single line of code. Common univariate and bivariate methods, such as t tests, proportions tests, and chi-squared tests, are integrated. Multivariate studies involving linear or logistic regression may also be specified with symbolic inputs. The simulation studies generate data for n observations in each of B experiments. Analyses of each experiment are integrated, and empirical results across the experiments are also provided.
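As a flavour of the n-observations-in-B-experiments design in plain base R (a hand-rolled sketch, not this package's interface):

    # B replications of a two-sample t test on n observations per group;
    # the empirical rejection rate across experiments estimates power.
    B <- 1000; n <- 30
    pvals <- replicate(B, t.test(rnorm(n), rnorm(n, mean = 0.5))$p.value)
    mean(pvals < 0.05)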
Scale invariant version of the original PNN proposed by Specht (1990) <doi:10.1016/0893-6080(90)90049-q> with the added functionality of allowing for smoothing along multiple dimensions while accounting for covariances within the data set. It is written in the R statistical programming language. Given a data set with categorical variables, we use this algorithm to estimate the probabilities of a new observation vector belonging to a specific category. This type of neural network provides the benefits of fast training time relative to backpropagation and statistical generalization with only a small set of known observations.
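For intuition, a bare-bones version of the classic PNN idea (Specht's formulation with a single isotropic Gaussian kernel; not this package's scale-invariant extension or its API) fits in a few lines of base R:

    # Estimate class membership probabilities for one new observation:
    # average a Gaussian kernel over each class's training rows, then
    # normalise the per-class scores to sum to one.
    pnn_predict <- function(X, y, xnew, h = 1) {
      classes <- unique(as.character(y))
      scores <- sapply(classes, function(k) {
        Xk <- X[y == k, , drop = FALSE]
        d2 <- rowSums(sweep(Xk, 2, xnew)^2)  # squared distances to xnew
        mean(exp(-d2 / (2 * h^2)))
      })
      scores / sum(scores)
    }
    pnn_predict(as.matrix(iris[, 1:4]), iris$Species, c(5.1, 3.5, 1.4, 0.2))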
This package provides utilities for cleaning survey data, computing weights, and performing descriptive statistical analysis. Methods follow Lohr (2019, ISBN:978-0367272454) "Sampling: Design and Analysis" and Lumley (2010) <doi:10.1002/9780470580066>.
This package provides a framework for extracting semantic motifs around entities in textual data. It implements an entity-centered semantic grammar that distinguishes six classes of motifs: actions of an entity, treatments of an entity, agents acting upon an entity, patients acted upon by an entity, characterizations of an entity, and possessions of an entity. Motifs are identified by applying a set of extraction rules to a parsed text object that includes part-of-speech tags and dependency annotations, such as those generated by 'spacyr'. For further reference, see: Stuhler (2022) <doi:10.1177/00491241221099551>.
Conduct various tests for evaluating implicit biases in word embeddings: Word Embedding Association Test (Caliskan et al., 2017) <doi:10.1126/science.aal4230>, Relative Norm Distance (Garg et al., 2018) <doi:10.1073/pnas.1720347115>, Mean Average Cosine Similarity (Manzini et al., 2019) <arXiv:1904.04047>, SemAxis (An et al., 2018) <arXiv:1806.05521>, Relative Negative Sentiment Bias (Sweeney & Najafian, 2019) <doi:10.18653/v1/P19-1162>, and Embedding Coherence Test (Dev & Phillips, 2019) <arXiv:1901.07656>.
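To ground the first of these, the WEAT statistic compares how strongly two target word sets X and Y associate with two attribute word sets A and B. A hedged base R sketch, assuming emb is a matrix of word vectors with words as row names (the helper names are illustrative, not this package's functions):

    cos_sim <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))
    # Association of word w with attribute sets A and B: the difference
    # in mean cosine similarity.
    assoc <- function(w, A, B, emb) {
      mean(sapply(A, function(a) cos_sim(emb[w, ], emb[a, ]))) -
        mean(sapply(B, function(b) cos_sim(emb[w, ], emb[b, ])))
    }
    # WEAT test statistic: total association of X minus that of Y.
    weat_stat <- function(X, Y, A, B, emb) {
      sum(sapply(X, assoc, A = A, B = B, emb = emb)) -
        sum(sapply(Y, assoc, A = A, B = B, emb = emb))
    }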
This package provides a set of functions to quantify and visualise social autocorrelation.
Get sun position, sunlight phases (times for sunrise, sunset, dusk, etc.), moon position and lunar phase for the given location and time. Most calculations are based on the formulas given in the Astronomy Answers articles about the position of the Sun and the planets: <https://www.aa.quae.nl/en/reken/zonpositie.html>.
Efficient Markov chain Monte Carlo (MCMC) algorithms for fully Bayesian estimation of dynamic survival models with shrinkage priors. Details on the algorithms used are provided in Wagner (2011) <doi:10.1007/s11222-009-9164-5>, Bitto and Frühwirth-Schnatter (2019) <doi:10.1016/j.jeconom.2018.11.006> and Cadonna et al. (2020) <doi:10.3390/econometrics8020020>.