Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
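For example, a search for hello could be issued from TypeScript as below. This is a minimal sketch: the base URL is a placeholder, and the pagination header name is an assumption, so check the headers your instance actually returns.

    // Query the package search API for "hello", 20 results per page.
    const base = "https://example.org"; // placeholder; use your instance's address
    const params = new URLSearchParams({ search: "hello", page: "1", limit: "20" });
    const res = await fetch(`${base}/api/packages?${params}`);
    const packages = await res.json();
    // Pagination metadata lives in the response headers, not the body.
    console.log(res.headers.get("X-Total-Pages")); // header name is an assumption
    console.log(packages);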
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Estimation of counterfactual outcomes for multiple values of continuous interventions at different time points, and plotting of causal dose-response curves. Details are given in Schomaker, McIlleron, Denti, Diaz (2024) <doi:10.48550/arXiv.2305.06645>.
Browser cookies are name-value pairs that are saved in a user's browser by a website. Cookies allow websites to persist information about the user and their use of the website. Here we provide tools for working with cookies in shiny apps, in part by wrapping the js-cookie JavaScript library <https://github.com/js-cookie/js-cookie>.
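The js-cookie library that this package wraps has a small, well-documented API; here is a TypeScript sketch of the wrapped calls (the shiny-side wrapper functions themselves differ, so this only illustrates the underlying mechanism):

    import Cookies from "js-cookie";

    // Persist a name-value pair in the user's browser for 7 days.
    Cookies.set("preferred_theme", "dark", { expires: 7 });

    // Read it back on a later visit (undefined if absent or expired).
    const theme = Cookies.get("preferred_theme");

    // Remove it when no longer needed.
    Cookies.remove("preferred_theme");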
Modeling of correlation transitions under specified distributional assumptions when continuous variables are discretized, based on the latency and threshold concepts. The details of the method are explained in Demirtas, H. and Vardar-Acar, C. (2017) <doi:10.1007/978-981-10-3307-0_4>.
Mines contiguous sequential patterns in text.
This package contains the prepared data that is needed for the shiny application examples in the canvasXpress package. This package also includes datasets used for automated testthat tests. Scotto L, Narayan G, Nandula SV, Arias-Pulido H et al. (2008) <doi:10.1002/gcc.20577>. Davis S, Meltzer PS (2007) <doi:10.1093/bioinformatics/btm254>.
Reading and writing of files in the most commonly used formats of structural crystallography. It includes functions to work with a variety of statistics used in this field and functions to perform basic crystallographic computing. References: D. G. Waterman, J. Foadi, G. Evans (2011) <doi:10.1107/S0108767311084303>.
This package provides conversion functionality between a broad range of scientific, historical, and industrial unit types.
The CoTiMA package performs meta-analyses of correlation matrices of repeatedly measured variables taken from studies that used different time intervals. Different time intervals between measurement occasions pose problems for meta-analyses because the effects (e.g. cross-lagged effects) cannot simply be aggregated, for example, by means of common fixed or random effects analysis. However, continuous time math, which is applied in CoTiMA, can be used to extrapolate or interpolate the results from all studies to any desired time lag. In this way, effects obtained in studies that used different time intervals can be meta-analyzed. CoTiMA fits models to empirical data using the structural equation model (SEM) package ctsem; the effects specified in a SEM are related to parameters that are not directly included in the model (i.e., continuous time parameters; together, they represent the continuous time structural equation model, CTSEM). Statistical model comparisons and significance tests are then performed on the continuous time parameter estimates. CoTiMA also allows analysis of publication bias (Egger's test, PET-PEESE estimates, zcurve analysis, etc.) and analysis of statistical power (post hoc power, required sample sizes). See Dormann, C., Guthier, C., & Cortina, J. M. (2019) <doi:10.1177/1094428119847277> and Guthier, C., Dormann, C., & Voelkle, M. C. (2020) <doi:10.1037/bul0000304>.
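The identity behind this extrapolation, stated here as the standard continuous time SEM result rather than CoTiMA's exact parameterization, relates the discrete-time autoregressive and cross-lagged effect matrix at lag $\Delta t$ to the continuous time drift matrix $A$:

$$A^{*}_{\Delta t} = e^{A \, \Delta t}$$

Because $A$ does not depend on any particular measurement interval, estimates of $A$ pooled across studies can be re-expressed at any desired time lag, whatever intervals the original studies used.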
Enumerate orientation-consistent directed networks from an undirected or partially directed skeleton, detect feedback loops, summarize topology, and simulate node dynamics via stochastic differential equations.
Test for cluster tendency (clusterability) of a data set. The methods implemented - reducing the data set to a single dimension using principal component analysis or computing pairwise distances, and performing a multimodality test like the Dip Test or Silverman's Critical Bandwidth Test - are described in Adolfsson, Ackerman, and Brownstein (2019) <doi:10.1016/j.patcog.2018.10.026>. Such methods can inform whether clustering algorithms are appropriate for a data set.
This package provides a Shiny application to conduct comprehensive analysis of latent means including the examination of group equivalency, propensity score analysis, measurement invariance analysis, and assessment of latent mean differences of equivalent groups with invariant data. Group equivalency and propensity score analyses are implemented using the MatchIt package [Ho et al. (2011) <doi:10.18637/jss.v042.i08>], ensuring robust control for covariates. Structural equation modeling and invariance testing rely heavily on the lavaan package [Rosseel (2012) <doi:10.18637/jss.v048.i02>], providing a flexible and powerful modeling framework. The application also integrates modified functions from Hammack-Brown et al. (2021) <doi:10.1002/hrdq.21452> to support factor ratio testing and the list-and-delete procedure.
Download and read data on United States congressional proceedings. Data is read from the Library of Congress's Congress.gov Application Programming Interface (<https://github.com/LibraryOfCongress/api.congress.gov/>). Functions exist for all version 3 endpoints, including for bills, amendments, congresses, summaries, members, reports, communications, nominations, and treaties.
Non-parametric tests (Wilcoxon rank sum test and Wilcoxon signed rank test) for clustered data documented in Jiang et al. (2020) <doi:10.18637/jss.v096.i06>.
This package provides functions to construct finite-sample calibrated predictive intervals for Bayesian models, following the approach in Barber et al. (2021) <doi:10.1214/20-AOS1965>. These intervals are calculated efficiently using importance sampling for the leave-one-out residuals. By default, the intervals will also reflect the relative uncertainty in the Bayesian model, using the locally-weighted conformal methods of Lei et al. (2018) <doi:10.1080/01621459.2017.1307116>.
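As a sketch of the underlying idea, using the jackknife+ of Barber et al. rather than this package's exact importance-sampling variant: with leave-one-out residuals $R_i = |y_i - \hat{\mu}_{-i}(x_i)|$, the level-$(1-\alpha)$ interval at a new point $x$ is

$$\hat{C}_{\alpha}(x) = \left[\, q^{-}_{\alpha}\{\hat{\mu}_{-i}(x) - R_i\},\; q^{+}_{\alpha}\{\hat{\mu}_{-i}(x) + R_i\} \,\right],$$

where $q^{-}_{\alpha}$ and $q^{+}_{\alpha}$ denote the $\lfloor \alpha (n+1) \rfloor$-th smallest and $\lceil (1-\alpha)(n+1) \rceil$-th smallest of the $n$ bracketed values, respectively.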
This package is designed to streamline the calculation of complete annual growth rates with user-friendly functions and robust algorithms, enabling researchers and analysts to generate precise growth rate estimates for their data. For method details see Sharma, M.K. (2013) <https://www.indianjournals.com/ijor.aspx?target=ijor:jfl&volume=26&issue=1and2&article=018>. It offers a comprehensive suite of functions with customisable parameters, is equipped to handle data structures of varying complexity, and helps users uncover growth dynamics and make informed decisions.
This package provides functions for estimating and reporting multi-year averages and corresponding confidence intervals and distributions. A potential use case is reporting the chemical and ecological status of surface waters according to the European Water Framework Directive.
An R implementation of the algorithms described in Reingold and Dershowitz (4th ed., Cambridge University Press, 2018) <doi:10.1017/9781107415058>, allowing conversion between many different calendar systems. Cultural and religious holidays from several calendars can be calculated.
Sample and cell filtering as well as visualisation of output metrics from Cell Ranger by Grace X.Y. Zheng et al. (2017) <doi:10.1038/ncomms14049>. CRMetrics allows for easy plotting of output metrics across multiple samples, as well as comparative plots with statistical assessments. CRMetrics allows for easy removal of ambient RNA using SoupX by Matthew D Young and Sam Behjati (2020) <doi:10.1093/gigascience/giaa151> or CellBender by Stephen J Fleming et al. (2022) <doi:10.1101/791699>. Furthermore, it is possible to preprocess data using Pagoda2 by Nikolas Barkas et al. (2021) <https://github.com/kharchenkolab/pagoda2> or Seurat by Yuhan Hao et al. (2021) <doi:10.1016/j.cell.2021.04.048>, followed by embedding of cells using Conos by Nikolas Barkas et al. (2019) <doi:10.1038/s41592-019-0466-z>. Doublets can be detected using scrublet by Samuel L. Wolock et al. (2019) <doi:10.1016/j.cels.2018.11.005> or DoubletDetection by Gayoso et al. (2020) <doi:10.5281/zenodo.2678041>. Finally, cells are filtered based on user input for use in downstream applications.
This package provides a collection of functions dedicated to simulating staggered entry platform trials whereby the treatment under investigation is a combination of two active compounds. In order to obtain approval for this combination therapy, superiority of the combination over the two active compounds and superiority of the two active compounds over placebo need to be demonstrated. A more detailed description of the design can be found in Meyer et al. <doi:10.1002/pst.2194> and a manual in Meyer et al. <arXiv:2202.02182>.
This package provides a collection of functions that make it easier to understand crime (or other) data, and assist others in understanding it. The package helps you read data from various sources, clean it, fix column names, and graph the data.
This package provides authentication for Shiny applications using Amazon Cognito (<https://aws.amazon.com/es/cognito/>).
ClickHouse (<https://clickhouse.com/>) is an open-source, high-performance columnar OLAP (online analytical processing) database management system for real-time analytics using SQL. This DBI backend relies on the ClickHouse HTTP interface and supports the HTTPS protocol.
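That HTTP interface can also be exercised directly, which helps when debugging the backend. A minimal TypeScript sketch, assuming a local ClickHouse server on the default HTTP port 8123 with open access:

    // Send a SQL query to ClickHouse's HTTP interface and read JSON back.
    const res = await fetch("http://localhost:8123/", {
      method: "POST",
      body: "SELECT number FROM system.numbers LIMIT 3 FORMAT JSON",
    });
    const result = await res.json();
    // FORMAT JSON wraps rows in a { meta, data, rows, ... } envelope;
    // 64-bit integers arrive as strings by default.
    console.log(result.data);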
Comprehensive data analysis software; the name "cg" stands for "compare groups." Its genesis and evolution are driven by the common need to compare administrations, conditions, etc. in medical research and development. The current version provides comparisons of unpaired samples, i.e. a linear model with one factor of at least two levels. It also provides comparisons of two paired samples. Good data graphs, modern statistical methods, and useful displays of results are emphasized.
This package implements adaptive designs for integrated phase I/II trials of drug combinations via the continual reassessment method (CRM), evaluating toxicity and efficacy simultaneously for each enrolled patient cohort based on Bayesian inference. It supports patient assignment guidance in a single trial using currently enrolled data, as well as extensive simulation studies to evaluate operating characteristics before the trial starts. It includes various link functions such as empiric, one-parameter logistic, two-parameter logistic, and hyperbolic tangent, and considers multiple prior distributions of the parameters, such as the normal, gamma, and exponential distributions, to accommodate diverse clinical scenarios. The method using the Bayesian framework with the empiric link function is described in Wages and Conaway (2014) <doi:10.1002/sim.6097>.
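As a sketch of the empiric (power) link function, in its standard CRM form rather than necessarily this package's exact parameterization: the probability of toxicity at dose level $i$ is modeled as

$$\pi_i(\theta) = p_i^{\exp(\theta)},$$

where $p_1 < \dots < p_k$ is the prespecified skeleton of prior toxicity guesses and $\theta$ is the single model parameter updated by Bayesian inference as cohorts accrue.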