Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
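A minimal sketch of calling this endpoint from R, assuming the API is served from the same host as this page (the base_url below is only a placeholder) and that the body is JSON:

    library(httr)

    base_url <- "https://example.org"  # placeholder; substitute the host serving this page
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)

    results    <- content(resp, as = "parsed")  # parsed JSON body with the matching packages
    pagination <- headers(resp)                 # pagination details are returned in the headers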
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a method of clustering functional data using subregion information of the curves. It is intended to supplement the fda and fda.usc packages in functional data object clustering. It also facilitates the printing and plotting of the results in a tree format and limits the partitioning candidates into a specific set of subregions.
This package analyzes regression data with many and/or highly collinear predictor variables by simultaneously reducing the predictors to a limited number of components and regressing the criterion variables on these components (de Jong, S. & Kiers, H. A. L. (1992) <doi:10.1016/0169-7439(92)80100-I>). Several rotation and model selection options are provided.
This package provides a number of functions to simplify and automate the scoring, comparison, and evaluation of different ways of creating composites of data. It is particularly aimed at facilitating the creation of physiological composites of metabolic syndrome symptom score (MetSSS) and allostatic load (AL). Provides a wrapper to calculate the MetSSS on new data using the Healthy Hearts formula.
This package provides functions for graph-based multiple-sample testing and visualization of microbiome data, in particular data stored in phyloseq objects. The tests are based on those described in Friedman and Rafsky (1979) <http://www.jstor.org/stable/2958919> and are described in more detail in Callahan et al. (2016) <doi:10.12688/f1000research.8986.1>.
This package performs Bayesian arm-based network meta-analysis for datasets with binary, continuous, and count outcomes (Zhang et al., 2014 <doi:10.1177/1740774513498322>; Lin et al., 2017 <doi:10.18637/jss.v080.i05>).
Calculate sample size or power for hierarchical endpoints. The package can handle any type of outcome (binary, continuous, count, ordinal, time-to-event) and any number of such endpoints. It allows users to calculate sample size for a given power, or power for a given sample size, for hypothesis testing based on win ratios, win odds, net benefit, or DOOR (desirability of outcome ranking) as the treatment effect between two groups for hierarchical endpoints. The methods of this package are described further in Barnhart, H. X. et al. (2024, <doi:10.1080/19466315.2024.2365629>).
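To illustrate the treatment-effect measures named above (plain arithmetic with made-up counts, not this package's own functions), suppose every pairwise comparison between the two groups has been classified as a treatment win, a control win, or a tie:

    wins   <- 70   # comparisons won by the treatment arm (illustrative numbers only)
    losses <- 40   # comparisons won by the control arm
    ties   <- 10
    n      <- wins + losses + ties

    win_ratio   <- wins / losses                            # 1.75
    win_odds    <- (wins + ties / 2) / (losses + ties / 2)  # ~1.67
    net_benefit <- (wins - losses) / n                      # 0.25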
This package provides a set of concise and efficient tools for statistical production; they can also be used for data management. In statistical production you deal with complex data and need to control your process at each step of your work, and concise functions help because you do not hesitate to use them. The package includes the following functions: dup checks duplicates, miss checks missing values, tac computes a contingency table of all columns, toc compares two tables and spots significant deviations, and chi2_find compares columns within a data.frame and spots related categories (a more complex function).
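A hedged sketch of how these helpers might be used; the call forms below assume each function takes a data frame as its first argument, which this description does not state, so check the package documentation before relying on them:

    df  <- data.frame(id = c(1, 2, 2, 3), group = c("a", "a", "b", NA))
    df2 <- data.frame(id = c(1, 2, 3), group = c("a", "b", "b"))

    dup(df)       # assumed call form: flag duplicated rows
    miss(df)      # assumed call form: summarise missing values
    tac(df)       # assumed call form: contingency table of all columns
    toc(df, df2)  # assumed call form: compare two tables and spot deviations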
Aims at detecting single nucleotide variation (SNV) and insertion/deletion (INDEL) in circulating tumor DNA (ctDNA), used as a surrogate marker for the tumor, at each base position of a Next Generation Sequencing (NGS) analysis. Mutations are assessed by comparing the minor-allele frequency at each position to the PER measured in control samples.
Check compliance of event data from (business) processes with respect to specified rules. Three types of rules are supported: frequency (activities that should (not) happen a given number of times), order (succession between activities), and exclusiveness ('and' and exclusive-choice relations between activities).
An implementation of prediction intervals for overdispersed count data, for overdispersed binomial data and for linear random effects models.
Automated pain scoring from paw withdrawal tracking data. Based on Jones et al. (2020) "A machine-vision approach for automated pain measurement at millisecond timescales" <doi:10.7554/eLife.57258>.
Hybrid control design is a way to borrow information from external controls to augment the concurrent controls in a randomized controlled trial, and is expected to overcome feasibility issues when an adequate randomized controlled trial cannot be conducted. A major challenge in hybrid control design is prior-data conflict caused by systematic imbalances in measured or unmeasured confounding factors between patients in the concurrent treatment/control groups and the external controls. To prevent prior-data conflict, a combined use of propensity score matching and a Bayesian commensurate prior has been proposed in the context of hybrid control design: propensity score matching is performed first to guarantee balance in baseline characteristics, and the Bayesian commensurate prior is then constructed while discounting the information according to the similarity in outcomes between the concurrent and external controls. psBayesborrow implements the propensity score matching and the Bayesian analysis with a commensurate prior, and can also run simulation studies to assess the operating characteristics of a hybrid control design, letting users choose design parameters in flexible and straightforward ways for their own application.
Produces the time-dependent receiver operating characteristic (ROC) curve through parametric approaches. Tools are provided for generating random data, fitting, predicting, and checking goodness of fit. The methods are developed from the theoretical framework of the proportional hazards model and copula functions. Using this package, users can simulate parametric time-dependent ROC curves and run experiments to understand the behavior of the curve under different scenarios.
This package provides a ggplot2 front end to plot summary statistics on Danish provinces, regions, municipalities, and zipcodes. The geoms needed for each of the four levels are included in the package, making these types of plots easy for the user. This is essentially an updated port of the previously available mapDK package by Sebastian Barfort.
Calibrate p-values under a robust perspective using the methods developed by Sellke, Bayarri, and Berger (2001) <doi:10.1198/000313001300339950>, and obtain measures of the evidence provided by the data in favor of point null hypotheses; these measures are safer and more straightforward to interpret.
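For orientation, the well-known calibrations from Sellke, Bayarri, and Berger (2001) can be computed directly; the snippet below implements those published bounds in plain R rather than this package's own functions:

    # Lower bound on the Bayes factor in favour of the point null, valid for p < 1/e
    bf_bound <- function(p) ifelse(p < exp(-1), -exp(1) * p * log(p), 1)

    # Implied lower bound on the posterior probability of the null,
    # assuming equal prior probabilities for the null and the alternative
    post_bound <- function(p) bf_bound(p) / (1 + bf_bound(p))

    bf_bound(0.05)    # ~0.41
    post_bound(0.05)  # ~0.29: p = 0.05 still leaves substantial support for the null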
Implementation of propensity clustering and decomposition as described in Ranola et al. (2013) <doi:10.1186/1752-0509-7-21>. Propensity decomposition can be viewed on the one hand as a generalization of the eigenvector-based approximation of correlation networks, and on the other hand as a generalization of random multigraph models and conformity-based decompositions.
Generates design matrices for analysing real paired comparisons and derived paired-comparison data (Likert-type items/ratings or rankings) using a loglinear approach. Fits the loglinear Bradley-Terry model (LLBT), exploiting an eliminate feature. Computes pattern models for paired comparisons, rankings, and ratings. Offers some treatment of missing values (MCAR and MNAR). Fits latent class (mixture) models for paired comparison, rating, and ranking patterns using a non-parametric ML approach.
Statistical power analysis for designs including t-tests, correlations, multiple regression, ANOVA, mediation, and logistic regression. Functions accompany Aberson (2019) <doi:10.4324/9781315171500>.
Programmatic interface to the PhenoCam web services (<https://phenocam.nau.edu/webcam>). Allows for easy downloading of PhenoCam data directly to your R workspace or your computer, and provides post-processing routines for consistent and easy time-series outlier detection, smoothing, and estimation of phenological transition dates. Methods for this package are described in detail in Hufkens et al. (2018) <doi:10.1111/2041-210X.12970>.
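A short sketch of a typical download call; the function name and argument names below follow the package's documented examples but should be treated as assumptions, so consult the package help pages for the exact interface:

    library(phenocamr)

    download_phenocam(site      = "harvard$",  # site name (regular expression)
                      veg_type  = "DB",        # deciduous broadleaf region of interest
                      roi_id    = "1000",
                      frequency = "3")         # 3-day product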
This package provides functions to prepare rankings data and fit the Plackett-Luce model jointly attributed to Plackett (1975) <doi:10.2307/2346567> and Luce (1959, ISBN:0486441369). The standard Plackett-Luce model is generalized to accommodate ties of any order in the ranking. Partial rankings, in which only a subset of items are ranked in each ranking, are also accommodated in the implementation. Disconnected/weakly connected networks implied by the rankings may be handled by adding pseudo-rankings with a hypothetical item. Optionally, a multivariate normal prior may be set on the log-worth parameters and ranker reliabilities may be incorporated as proposed by Raman and Joachims (2014) <doi:10.1145/2623330.2623654>. Maximum a posteriori estimation is used when priors are set. Methods are provided to estimate standard errors or quasi-standard errors for inference as well as to fit Plackett-Luce trees. See the package website or vignette for further details.
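A brief, hedged example of fitting the model to a few partial rankings, following the interface described in the package documentation (a matrix of ranks is converted with as.rankings(), with 0 marking unranked items):

    library(PlackettLuce)

    # Each row ranks four items; 0 means the item was not ranked in that row.
    R <- matrix(c(1, 2, 0, 0,
                  4, 1, 2, 3,
                  2, 1, 3, 0,
                  1, 2, 3, 0),
                nrow = 4, byrow = TRUE,
                dimnames = list(NULL, c("apple", "banana", "orange", "pear")))

    rankings <- as.rankings(R)
    fit <- PlackettLuce(rankings)
    coef(fit)  # estimated log-worth parameters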
Computes optimal changepoint models using the Poisson likelihood for non-negative count data, subject to the PeakSeg constraint: the first change must be up, second change down, third change up, etc. For more info about the models and algorithms, read "Constrained Dynamic Programming and Supervised Penalty Learning Algorithms for Peak Detection" <https://jmlr.org/papers/v21/18-843.html> by TD Hocking et al.
This wrapper houses PathLit API endpoints for R. Using these endpoints requires an API key, which can be obtained at <https://www.pathlit.io/docs/cli/>.
This package creates a non-negative low-rank approximate factorization of a sparse counts matrix by maximizing the Poisson likelihood with L1/L2 regularization (e.g. for implicit-feedback recommender systems or bag-of-words-based topic modeling) (Cortes (2018) <arXiv:1811.01908>), which usually leads to very sparse user and item factors (over 90% zero-valued). Similar to hierarchical Poisson factorization (HPF), but follows an optimization-based approach with regularization instead of a hierarchical prior, and is fit through gradient-based methods instead of variational inference.
Processing of chlorophyll fluorescence and P700 absorbance data generated by WALZ hardware. Four models are provided for the regression of Pi curves, which can be compared with each other in order to select the most suitable model for the data set. Control plots ensure the successful verification of each regression. Bundled output of alpha, ETRmax, Ik, etc. enables fast and reliable further processing of the data.