Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
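For example, a request for the first page of packages matching hello could be made from R with the 'httr' package (a minimal sketch; the base URL is a placeholder for wherever this service is hosted):

    library(httr)
    # Placeholder host; substitute the actual address of this service
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination information, e.g. the number of pages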
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Wrangle and annotate different types of political texts. The package also introduces Urgency Analysis, a new method for analyzing urgency in political texts.
Reconstruction of paleoclimate niches using phylogenetic comparative methods and projection of reconstructed niches onto paleoclimate maps. The user can specify various models of trait evolution or estimate the best-fit model, include fossils, use one or multiple phylogenies for inference, and make animations of shifting suitable habitat through time. This model was first used in Lawing and Polly (2011), and further implemented in Lawing et al. (2016) and Rivera et al. (2020). Lawing and Polly (2011) <doi:10.1371/journal.pone.0028554> "Pleistocene climate, phylogeny and climate envelope models: An integrative approach to better understand species response to climate change"; Lawing et al. (2016) <doi:10.1086/687202> "Including fossils in phylogenetic climate reconstructions: A deep time perspective on the climatic niche evolution and diversification of spiny lizards (Sceloporus)"; Rivera et al. (2020) <doi:10.1111/jbi.13915> "Reconstructing historical shifts in suitable habitat of Sceloporus lineages using phylogenetic niche modelling".
Google Trends provides cross-sectional and time-series data on searches, but lacks readily available longitudinal data. Researchers who want to create longitudinal Google Trends data on their own face practical challenges, such as normalized counts that make it difficult to combine cross-sectional and time-series data, and limitations in data formats and timelines that limit data granularity over extended time periods. This package addresses these issues and enables researchers to generate longitudinal Google Trends data. It is built on 'pytrends', a Python library that acts as the unofficial Google Trends API to collect Google Trends data. As long as the Google Trends API, 'pytrends' and all their dependencies are working, this package will work. During testing, we noticed that for the same input (keyword, topic, data_format, timeline), the output index can vary from time to time. Moreover, if the keyword is not very popular, the resulting dataset will contain many zeros, which will greatly affect the final result. While this package has no control over the accuracy or quality of Google Trends data, once the data is created, this package converts it to longitudinal data. In addition, the user may encounter a 429 Too Many Requests error when using cross_section() and time_series() to collect Google Trends data. This error indicates that the user has exceeded the rate limits set by the Google Trends API. For more information about 'pytrends', visit <https://pypi.org/project/pytrends/>.
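A minimal sketch of the two collection functions named above; only the inputs mentioned in the description (keyword, topic, data_format, timeline) are used, and the exact signatures are assumptions:

    # Assumed signatures built from the inputs named in the description;
    # consult the package documentation for the real argument names.
    cs <- cross_section(keyword = "coffee", timeline = "2019-01-01 2023-12-31")
    ts <- time_series(keyword = "coffee", data_format = "weekly")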
Utilities for the Pareto, piecewise Pareto and generalized Pareto distributions that are useful for reinsurance pricing. In particular, the package provides a non-trivial algorithm that can be used to match the expected losses of a tower of reinsurance layers with a layer-independent collective risk model. The theoretical background of the matching algorithm and most other methods are described in Ulrich Riegel (2018) <doi:10.1007/s13385-018-0177-3>.
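For example, the expected loss of a reinsurance layer under a Pareto severity might be computed as follows (a sketch; Pareto_Layer_Mean() is in the package, but the argument order shown here is an assumption from its documentation):

    library(Pareto)
    # Expected loss of a 4000 xs 1000 layer for a Pareto severity with
    # threshold 1000 and tail parameter alpha = 2 (arguments assumed)
    Pareto_Layer_Mean(Cover = 4000, AttachmentPoint = 1000, alpha = 2)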
Measures real distances in pictures. With the PDM() function, you can choose a *.jpg file, specify the scale length in mm, mark the starting and finishing points of the graphical scale, name the measure, and mark the starting and finishing points of the measure. Afterwards, the function asks the user for a new measure.
Metadata and package cache for CRAN-like repositories. This is a utility package to be used by package management tools that want to take advantage of caching.
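A small sketch of how a package management tool might query the cache; the meta_cache_*() names follow my reading of the 'pkgcache' documentation and should be treated as assumptions:

    library(pkgcache)
    meta_cache_update()           # refresh the cached metadata
    meta_cache_list("ggplot2")    # metadata rows for one package
    meta_cache_deps("ggplot2")    # its dependencies, resolved from the cache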
R package to compute Incoming Solar Radiation (insolation) for palaeoclimate studies. Features three solutions: Berger (1978), Berger and Loutre (1991), and Laskar et al. (2004). Computes daily-mean, season-averaged and annual means for all latitudes, as well as polar night dates.
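For instance, mean insolation at 65°N at the June solstice 100 kyr ago might be computed like this (a sketch; ber78() and Insol() are package functions as I recall them, and the argument details are assumptions):

    library(palinsol)
    orbit <- ber78(t = -100e3)   # orbital parameters from the Berger (1978) solution
    # True solar longitude pi/2 corresponds to the June solstice;
    # latitude is given in radians (units assumed)
    Insol(orbit = orbit, long = pi / 2, lat = 65 * pi / 180)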
Automated backtesting of multiple portfolios over multiple datasets of stock prices in a rolling-window fashion. Intended for researchers and practitioners to backtest a set of different portfolios, as well as for course instructors to assess students' portfolio designs in a fully automated and convenient manner, with results conveniently formatted in tables and plots. Each portfolio design is defined as a function that takes as input a window of the stock prices and outputs the portfolio weights (see the sketch below). Multiple portfolios can be specified as a list of functions or as files in a folder. Multiple datasets can be extracted randomly from different markets, different time periods, and different subsets of the stock universe. The results can then be assessed and ranked with tables based on a number of performance criteria (e.g., expected return, volatility, Sharpe ratio, drawdown, turnover rate, return on investment, and computational time), as well as plotted in a number of ways with barplots and boxplots.
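A sketch of one portfolio design, a naive 1/N portfolio; the dataset$adjusted field (a window of adjusted prices) follows the package's examples as I recall them and is an assumption:

    # Equal weight to every stock in the current window
    uniform_portfolio <- function(dataset, ...) {
      N <- ncol(dataset$adjusted)   # 'adjusted' field assumed from package examples
      rep(1/N, N)
    }
    # Then, roughly: portfolioBacktest(list("1/N" = uniform_portfolio), dataset_list)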
An efficient tool designed for differential analysis of large-scale RNA sequencing (RNAseq) data and Bisulfite sequencing (BSseq) data in the presence of individual relatedness and population structure. PQLseq first fits a Generalized Linear Mixed Model (GLMM) with adjusted covariates, a predictor of interest and random effects to account for population structure and individual relatedness, and then performs Wald tests for each gene in RNAseq or each site in BSseq.
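A hedged sketch of the fitting call on toy data; the function name pqlseq() and its argument names are assumptions based on the package documentation:

    library(PQLseq)
    counts <- matrix(rpois(500, 10), nrow = 50, ncol = 10)  # 50 genes x 10 individuals (toy)
    pheno  <- rbinom(10, 1, 0.5)                            # predictor of interest
    K      <- diag(10)                                      # relatedness matrix (toy)
    # fit.model = "PMM" for RNAseq counts, "BMM" for BSseq (assumed)
    fit <- pqlseq(RawCountDataSet = counts, Phenotypes = pheno,
                  RelatednessMatrix = K, fit.model = "PMM")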
This package implements the algorithm of Christensen (2024) <doi:10.1214/22-BA1353> for estimating marginal likelihoods via permutation counting.
This package contains statistical inference tools applied to Partial Linear Regression (PLR) models. Specifically, point estimation, confidence interval estimation, bandwidth selection, goodness-of-fit tests and analysis of covariance are considered. Kernel-based methods, combined with ordinary least squares estimation, are used and time series errors are allowed. In addition, these techniques are also implemented for both parametric (linear) and nonparametric regression models.
User-friendly functions for power and sample size analysis in one-way and two-way ANOVA settings; they take either an effect size, or delta and sigma, as arguments. In addition, a function for plotting power curves is available for power comparison, so results can be easily visualized by statisticians and clinical researchers.
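A sketch of a one-way power calculation and a power curve; the function and argument names here are assumptions about this package's interface:

    # Assumed interface: power for a one-way ANOVA with k groups of size n,
    # given delta (effect) and sigma (SD), plus a power-curve plot
    pwr.1way(k = 3, n = 20, alpha = 0.05, delta = 1.5, sigma = 2)
    pwr.plot(n = 2:30, k = 3, delta = 1.5, sigma = 2, alpha = 0.05)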
In ancient Roman mythology, Pluto was the ruler of the underworld and presided over the afterlife. Pluto was frequently conflated with 'Plutus', the god of wealth, because mineral wealth was found underground. When plotting with R, you try once, twice, practice again and again, and finally you get the pretty figure you want. It's a 'plot tour', a tour about repetition and reward. Hope 'plutor' helps you on the tour!
Prediction limits for the Poisson distribution are produced from both frequentist and Bayesian viewpoints. Limiting results are provided in a Bayesian setting with uniform, Jeffreys and gamma as prior distributions. More details on the methodology are discussed in Bejleri and Nandram (2018) <doi:10.1080/03610926.2017.1373814> and Bejleri, Sartore and Nandram (2021) <doi:10.1007/s42952-021-00157-x>.
Computes predicted probabilities and marginal effects for binary and ordinal logit and probit models, as well as (partial) generalized ordinal and multinomial logit models, estimated with the glm(), clm() (in the 'ordinal' package), and vglm() (in the 'VGAM' package) functions.
Makes time series prediction easier by automating the process with four main functions: prep(), modl(), pred() and postp(). Features different preprocessing methods to homogenize variance and to remove trend and seasonality. It can also bring together different predictive models for comparison. Features ARIMA and data mining regression models (using 'caret').
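The four-step workflow might look like this (a sketch; the function names come from the description, while the arguments are assumptions):

    p <- prep(AirPassengers)     # preprocess: homogenize variance, remove trend/seasonality
    m <- modl(p)                 # fit a model, e.g. ARIMA
    f <- pred(m, n.ahead = 12)   # predict the next 12 observations (argument assumed)
    postp(f, p)                  # undo the preprocessing on the predictions (assumed)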
This package provides functions to estimate and plot smooth or linear population trends, or population indices, from animal or plant count survey data.
This package provides a lightweight yet powerful framework for building robust data analysis pipelines. With 'pipeflow', you initialize a pipeline with your dataset and construct workflows step by step by adding R functions. You can modify, remove, or insert steps and parameters at any stage, while 'pipeflow' ensures the pipeline's integrity. Overall, this package offers a beginner-friendly framework that simplifies and streamlines the development of data analysis pipelines by making them modular, intuitive, and adaptable.
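A minimal sketch of building and running a pipeline; the pipe_new()/pipe_add()/pipe_run() names and the ~data reference reflect my reading of the 'pipeflow' documentation and should be treated as assumptions:

    library(pipeflow)
    pip <- pipe_new("my-analysis", data = airquality)   # initialize with a dataset
    pip <- pipe_add(pip, "fit",
                    function(data = ~data) lm(Ozone ~ Wind, data = data))
    pip <- pipe_run(pip)                                # execute all steps in order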
Robust penalized (adaptive) elastic net S and M estimators for linear regression. The methods are proposed in Cohen Freue, G. V., Kepplinger, D., Salibián-Barrera, M., and Smucler, E. (2019) <https://projecteuclid.org/euclid.aoas/1574910036>. The package implements the extensions and algorithms described in Kepplinger, D. (2020) <doi:10.14288/1.0392915>.
Understanding the dynamics of potentially heterogeneous variables is important in statistical applications. This package provides tools for estimating the degree of heterogeneity across cross-sectional units in the panel data analysis. The methods are developed by Okui and Yanagi (2019) <doi:10.1016/j.jeconom.2019.04.036> and Okui and Yanagi (2020) <doi:10.1093/ectj/utz019>.
This package provides a function to estimate panel-corrected standard errors. Data may contain balanced or unbalanced panels.
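For example, on a toy balanced panel (a sketch; pcse()'s groupN/groupT arguments are as I recall the package documentation, so treat them as assumptions):

    library(pcse)
    set.seed(1)
    panel <- data.frame(country = rep(letters[1:5], each = 10),
                        year    = rep(2001:2010, times = 5),
                        x       = rnorm(50))
    panel$y <- 1 + 2 * panel$x + rnorm(50)
    fit <- lm(y ~ x, data = panel)
    # Panel-corrected standard errors, grouping by unit and by time
    summary(pcse(fit, groupN = panel$country, groupT = panel$year))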
This package contains functions to compute and plot confidence distributions, confidence densities, p-value functions and s-value (surprisal) functions for several commonly used estimates. Instead of just calculating one p-value and one confidence interval, p-value functions display p-values and confidence intervals for many levels, thereby allowing one to gauge the compatibility of several parameter values with the data. These methods are discussed by Infanger D, Schmidt-Trucksäss A. (2019) <doi:10.1002/sim.8293>; Poole C. (1987) <doi:10.2105/AJPH.77.2.195>; Schweder T, Hjort NL. (2002) <doi:10.1111/1467-9469.00285>; Bender R, Berg G, Zeeb H. (2005) <doi:10.1002/bimj.200410104>; Singh K, Xie M, Strawderman WE. (2007) <doi:10.1214/074921707000000102>; Rothman KJ, Greenland S, Lash TL. (2008, ISBN:9781451190052); Amrhein V, Trafimow D, Greenland S. (2019) <doi:10.1080/00031305.2018.1543137>; Greenland S. (2019) <doi:10.1080/00031305.2018.1529625> and Rafi Z, Greenland S. (2020) <doi:10.1186/s12874-020-01105-9>.
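A sketch of drawing a p-value function for a single estimate; conf_dist() and its arguments follow my reading of the package documentation and are assumptions:

    library(pvaluefunctions)
    # P-value function for an estimate of -0.13 with standard error 0.224
    conf_dist(estimate = -0.13, stderr = 0.224, type = "general_z",
              plot_type = "p_val", conf_level = c(0.95, 0.99), null_values = 0)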
Estimates unsupervised outlier probabilities for multivariate numeric data with many observations from a nonparametric outlier statistic.
Create random passwords of letters, numbers and punctuation.