Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
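For example, the same query can be issued from a script. Here is a minimal sketch in Python using the requests library; the base URL is a placeholder, and the JSON response body and exact header names are assumptions rather than documented behaviour:

import requests

BASE_URL = "https://example.org"  # placeholder; substitute this site's actual host

# Search for packages matching "hello": first page, 20 items per page.
response = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "hello", "page": 1, "limit": 20},
)
response.raise_for_status()

# Pagination details (e.g. the number of pages) arrive in the response headers;
# the exact header names depend on the service.
print(dict(response.headers))
print(response.json())  # assuming the body is JSON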
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions and algorithms for solving optimal matching tasks in the context of preclinical cancer studies, as well as helper and plotting functions for unsupervised and supervised machine learning and for longitudinal mixed-effects modeling of tumor growth response patterns.
This package provides tools for estimating sample sizes primarily based on heritability, while also considering additional parameters such as statistical power and fold change. The package normalizes heritability values according to trait-specific heritability and classification to enhance accuracy in sample size estimation.
The Tweedie lasso model implements an iteratively reweighted least squares (IRLS) strategy that incorporates a blockwise majorization descent (BMD) method for efficiently computing solution paths of the (grouped) lasso and the (grouped) elastic net methods.
This package provides two games. The first is 2048 (for Windows): use the up and down keys to control the direction of movement until a 2048 tile appears. The second is 'what to eat today', intended for people who have difficulty choosing, and includes most of the delicious Cantonese cuisine.
This package provides univariate and indexed (multivariate) nonparametric smoothed kernel estimators for the future conditional hazard rate function when time-dependent covariates are present, a bandwidth selector for the estimator's implementation and pointwise and uniform confidence bands. Methods used in the package refer to Bagkavos, Isakson, Mammen, Nielsen and Proust-Lima (2025) <doi:10.1093/biomet/asaf008>.
Create dynamic, data-driven text. Given two values, a list of talking points is generated and can be combined using string interpolation. Based on the glue package.
This package provides a collection of datasets of human-computer interaction (HCI) experiments. Each dataset is from an HCI paper, with all fields described and the original publication linked. All paper authors of included data have consented to the inclusion of their data in this package. The datasets include data from a range of HCI studies, such as pointing tasks, user experience ratings, and steering tasks. Dataset sources: Bergström et al. (2022) <doi:10.1145/3490493>; Dalsgaard et al. (2021) <doi:10.1145/3489849.3489853>; Larsen et al. (2019) <doi:10.1145/3338286.3340115>; Lilija et al. (2019) <doi:10.1145/3290605.3300676>; Pohl and Murray-Smith (2013) <doi:10.1145/2470654.2481307>; Pohl and Mottelson (2022) <doi:10.3389/frvir.2022.719506>.
Computes the ACMIF test and Bonferroni-adjusted p-value of interaction in two-factor studies. Produces the corresponding interaction plot, analysis of variance tables, and p-values from several other tests of non-additivity.
We provide a toolbox to conduct a Bayesian meta-analysis for estimating the current expansion rate of the Universe, called the Hubble constant H0, via time delay cosmography. The input data are Fermat potential difference and time delay estimates. For a robust inference, we assume a Student's t error for these inputs. Given these inputs, the meta-analysis produces posterior samples of the model parameters including the Hubble constant via Metropolis-Hastings within Gibbs. The package provides an option to implement repelling-attracting Metropolis-Hastings within Gibbs in a case where the parameter space has multiple modes.
Several functions that use different methods to infer a piecewise polynomial regression model under regularity constraints, namely continuity or differentiability of the link function. The implemented functions are either specific to data with two regimes or generic for any number of regimes, which can be given by the user or learned by the algorithm. A paper describing all these methods will be submitted soon; the reference will be added here as soon as it is available.
Texts of H.C. Andersen's fairy tales, ready for text analysis. Fairy tales in German, Danish, English, Spanish, and French.
Calculates a suite of hydrologic indices for daily time series data that are widely used in hydrology and stream ecology.
For supersonic aircraft that fly subsonic over land, find the best route between airports. Allows for a coastal buffer and potentially closed regions. Uses a minimal model of aircraft performance: the focus is on time saved versus subsonic flight, rather than on the vertical flight profile. For modelling and forecasting, not for planning your flight!
Conducts analyses for healthcare program evaluations or intervention studies. Fits standard ordinary least squares (OLS, i.e. linear) or logistic regression models. Fits regression models used for causal modeling, such as difference-in-differences (DID) and interrupted time series (ITS) models. Provides limited interpretations of model results and a ranking of variable importance in models. Performs propensity score modeling and top-coding of model outcome variables, and can return new data with the newly formed variables. Also computes Cronbach's alpha for various scale items (e.g., survey questions). See the GitHub URL for examples in the README file. For more details on the statistical methods, see Allen & Yen (1979, ISBN:0-8185-0283-5), Angrist & Pischke (2009, ISBN:9780691120355), Harrell (2016, ISBN:978-3-319-19424-0), Kline (1999, ISBN:9780415211581), Linden (2015) <doi:10.1177/1536867X1501500208>, Merlo (2006) <doi:10.1136/jech.2004.029454>, Muthen & Satorra (1995) <doi:10.2307/271070>, and Rabe-Hesketh & Skrondal (2008, ISBN:978-1-59718-040-5).
Heterogeneous multi-task feature learning is a data integration method for conducting joint feature selection across multiple related data sets with different distributions. The algorithm can combine different types of learning tasks, including linear regression, Huber regression, adaptive Huber, and logistic regression. A modified version of the Bayesian Information Criterion (BIC) is used to measure model performance. The package is based on Yuan Zhong, Wei Xu, and Xin Gao (2022) <https://www.fields.utoronto.ca/talk-media/1/53/65/slides.pdf>.
This package provides a novel decision tree algorithm in a hypothesis testing framework. The algorithm examines the distribution difference between two child nodes over all possible binary partitions. The test statistic is equivalent to the generalized energy distance, which makes the algorithm more powerful at detecting complex structure, not only mean differences. It is applicable to numeric, nominal, and ordinal explanatory variables, and to responses in a general metric space of strong negative type. The algorithm has superior performance compared to other tree models in type I error, power, prediction accuracy, and complexity.
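As an illustration of the split criterion only (a generic sketch, not this package's implementation, which handles general metric spaces), the two-sample energy statistic for one-dimensional numeric samples can be computed like this in Python:

import numpy as np

def energy_distance(x, y):
    # Two-sample energy statistic: 2*E|X-Y| - E|X-X'| - E|Y-Y'|.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cross = np.abs(x[:, None] - y[None, :]).mean()     # distances between the samples
    within_x = np.abs(x[:, None] - x[None, :]).mean()  # distances within x
    within_y = np.abs(y[:, None] - y[None, :]).mean()  # distances within y
    return 2 * cross - within_x - within_y

# A candidate split would be scored by the statistic between its two child nodes.
left = np.random.normal(0, 1, 50)
right = np.random.normal(1, 1, 50)
print(energy_distance(left, right))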
Efficient sampling from high-dimensional truncated Gaussian distributions, or multivariate truncated normal (MTN). Techniques include zigzag Hamiltonian Monte Carlo as in Akihiko Nishimura, Zhenyu Zhang and Marc A. Suchard (2024) <doi:10.1080/01621459.2024.2395587>, and harmonic Monte Carlo in Ari Pakman and Liam Paninski (2014) <doi:10.1080/10618600.2013.788448>.
Offers methods for visualizing, modelling, and forecasting high-dimensional functional time series, also known as functional panel data. Documentation about hdftsa is provided via the paper by Cristian F. Jimenez-Varon, Ying Sun and Han Lin Shang (2024, <doi:10.1080/10618600.2024.2319166>).
This package implements marker-based estimation of heritability when observations on genetically identical replicates are available. These can be either observations on individual plants or plot-level data in a field trial. Heritability can then be estimated using a mixed model for the individual plant or plot data. For comparison, mixed-model based estimation using genotypic means and estimation of repeatability with ANOVA are also implemented. For illustration, the package contains several datasets for the model species Arabidopsis thaliana.
Flow data attract growing interest in diverse fields including trade, migration, knowledge diffusion, disease spread, and transportation. The package provides an effective visual aid for studying flow patterns, called the halfcircle diagram. The flow between two nodes placed on the center line of a circle is represented by a half circle drawn from the origin to the destination in a clockwise direction. By changing the order of nodes, the halfcircle diagram enables users to examine the complex relationship between bidirectional flows and potential determinants. Furthermore, the halfmeancenter function, which calculates the (un)weighted mean center of half circles, makes comparison easier.
This package provides uniform testing procedures for existence and heterogeneity of threshold effects in high-dimensional nonparametric panel regression models. The package accompanies the paper Chen, Keilbar, Su and Wang (2023) "Inference on many jumps in nonparametric panel regression models". arXiv preprint <doi:10.48550/arXiv.2312.01162>.
This package implements various heuristics, such as Take The Best and unit-weight linear, for two-alternative choice: which of two objects will have a higher criterion? It also offers functions to assess performance, e.g. percent correct across all row pairs in a data set, and to find row pairs where models disagree. New models can be added by implementing fit and predict functions; see the vignette. Take The Best was first described in Gigerenzer, G. & Goldstein, D. G. (1996) <doi:10.1037/0033-295X.103.4.650>. All of these heuristics were run on many data sets and analyzed in Gigerenzer, G., Todd, P. M., & the ABC Group (1999) <ISBN:978-0195143812>.
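As a rough illustration of the idea behind Take The Best (a generic Python sketch, not this package's fit/predict interface; cue names and values below are hypothetical), the heuristic walks through binary cues in order of validity and decides as soon as one cue discriminates:

def take_the_best(obj_a, obj_b, cues_by_validity):
    """Return 1 if obj_a is predicted to have the higher criterion,
    -1 if obj_b, and 0 if no cue discriminates (guess)."""
    for cue in cues_by_validity:  # cues ordered from most to least valid
        a, b = obj_a.get(cue, 0), obj_b.get(cue, 0)
        if a != b:
            return 1 if a > b else -1
    return 0

# Example: which of two cities is larger, judged by binary cues.
city_a = {"has_airport": 1, "is_capital": 0}
city_b = {"has_airport": 1, "is_capital": 1}
print(take_the_best(city_a, city_b, ["has_airport", "is_capital"]))  # prints -1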
This package provides a dependency-free interface to the H3 geospatial indexing system, utilizing the Rust library h3o <https://github.com/HydroniumLabs/h3o> via the extendr library <https://github.com/extendr/extendr>.
This package provides a tool for Hierarchical Climate Regionalization applicable to any correlation-based clustering. It adds several features and a new clustering method (called regional linkage) to hierarchical clustering in R (the hclust function in the stats library): data regridding, coarsening spatial resolution, geographic masking, contiguity-constrained clustering, data filtering by mean and/or variance thresholds, data preprocessing (detrending, standardization, and PCA), a faster correlation function with preliminary big data support, different clustering methods, hybrid hierarchical clustering, multivariate clustering (MVC), cluster validation, visualization of regionalization results, and exporting region maps and mean timeseries into a NetCDF-4 file. The technical details are described in Badr et al. (2015) <doi:10.1007/s12145-015-0221-7>.