Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
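For example, the endpoint can be queried from R with the httr2 package. The sketch below is illustrative only: the host name is a placeholder, the header names carrying the pagination details are not documented here, and the response body is assumed to be JSON.

    library(httr2)

    base_url <- "https://example.org"          # placeholder; substitute this site's host
    resp <- request(base_url) |>
      req_url_path("api", "packages") |>
      req_url_query(search = "hello", page = 1, limit = 20) |>
      req_perform()

    resp_headers(resp)     # pagination details (e.g. the total number of pages)
    resp_body_json(resp)   # the matching packages, assuming a JSON response body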
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Dynamic stochastic block model that combines a stochastic block model (SBM) for its static part with independent Markov chains for the evolution of the nodes' group memberships through time, developed in Matias and Miele (2016) <doi:10.1111/rssb.12200>.
You can load a schema from a DTR (data type registry) as an R object. Use this schema to write your data in JSON-LD (JavaScript Object Notation for Linked Data) format to make it machine-readable.
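As a generic illustration (not this package's own API), a JSON-LD document is an ordinary JSON object carrying "@context" and "@type" keys, so it can be written from R with jsonlite:

    library(jsonlite)

    # Hypothetical record following a schema.org-style context, written as JSON-LD.
    record <- list(
      `@context`  = "https://schema.org",
      `@type`     = "Dataset",
      name        = "example measurements",
      dateCreated = "2024-01-01"
    )
    toJSON(record, auto_unbox = TRUE, pretty = TRUE)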
An RStudio addin for teaching and learning data manipulation with the dplyr package. You can learn each step of data manipulation by clicking your mouse, without coding. You can get the resulting data (as a tibble) and the code for the data manipulation.
Comparison of the accuracy of two binary diagnostic tests in a "paired" study design, i.e. when each test is applied to each subject in the study.
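One standard base-R way to compare two paired binary results, which is not necessarily the method this package implements, is McNemar's test on the discordant pairs:

    # Illustration with base R only (not this package's interface): paired
    # positive/negative calls from two tests applied to the same 100 subjects.
    set.seed(42)
    test_A <- rbinom(100, 1, 0.7)
    test_B <- rbinom(100, 1, 0.6)
    mcnemar.test(table(test_A, test_B))   # compares the tests via discordant pairs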
This package provides a set of functions for securely storing API tokens and interacting with the <https://diariodeobras.net> system. Includes convenient wrappers around the httr2 package to perform authenticated requests, retrieve project details, tasks, reports, and more.
Converts any date type to the ISO standard. The package recognizes dates in a given data frame and transforms them to ISO format. Only one date format can be applied within a single data frame column.
For an observational study with binary treatment, binary outcome, and K strata, implements a d-statistic that uses those strata most insensitive to unmeasured bias in treatment assignment <doi:10.1093/biomet/asaa032>. The package has one function, dstat2x2xk.
This package provides a distance density clustering (DDC) algorithm in R. DDC uses dynamic time warping (DTW) to compute a similarity matrix, based on which cluster centers and cluster assignments are found. DDC inherits dynamic time warping (DTW) arguments and constraints. The cluster centers are centroid points that are calculated using the DTW Barycenter Averaging (DBA) algorithm. The clustering process is divisive. At each iteration, cluster centers are updated and data is reassigned to cluster centers. Early stopping is possible. The output includes cluster centers and clustering assignment, as described in the paper (Ma et al (2017) <doi:10.1109/ICDMW.2017.11>).
This package creates an interactive genome browser. It joins the data analysis power of R and the visualization libraries of JavaScript in one package. Barrios, D. & Prieto, C. (2017) <doi:10.1089/cmb.2016.0213>.
This package provides a wrapper for the DeepL API <https://developers.deepl.com/docs>, a web service for translating texts between different languages. A DeepL API developer account is required to use the service (see <https://www.deepl.com/pro#developer>).
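The wrapper's own function names are not listed here, but the underlying web service can be called directly from R with httr2, which sketches what the package automates (the free-tier endpoint and the DEEPL_AUTH_KEY environment variable below are assumptions):

    library(httr2)

    # Direct call to the DeepL REST API; the package wraps requests like this one.
    resp <- request("https://api-free.deepl.com/v2/translate") |>
      req_headers(Authorization = paste("DeepL-Auth-Key", Sys.getenv("DEEPL_AUTH_KEY"))) |>
      req_body_form(text = "Hello, world!", target_lang = "DE") |>
      req_perform()

    resp_body_json(resp)$translations[[1]]$text   # the translated string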
Likelihood-based inference methods with doubly-truncated data are developed under various models. Nonparametric models are based on Efron and Petrosian (1999) <doi:10.1080/01621459.1999.10474187> and Emura, Konno, and Michimae (2015) <doi:10.1007/s10985-014-9297-5>. Parametric models from the special exponential family (SEF) are based on Hu and Emura (2015) <doi:10.1007/s00180-015-0564-z> and Emura, Hu and Konno (2017) <doi:10.1007/s00362-015-0730-y>. The parametric location-scale models are based on Dorre et al. (2021) <doi:10.1007/s00180-020-01027-6>.
Helper functions for descriptive tasks such as making print-friendly bivariate tables, sample size flow counts, and visualizing sample distributions. Also contains R approximations of some common SAS and Stata functions, such as PROC MEANS from SAS and ladder, gladder, and pwcorr from Stata.
This package creates full factorial experimental designs and designs based on orthogonal arrays for (industrial) experiments. Provides diverse quality criteria. Provides utility functions for the class design, which is also used by other packages for designed experiments.
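The idea of a full factorial design can be sketched with base R alone; the package adds orthogonal arrays, quality criteria, and the design class on top of this (the factor names below are invented):

    # A 2 x 2 x 3 full factorial: every combination of factor levels appears once.
    design <- expand.grid(
      temperature = c("low", "high"),
      pressure    = c("low", "high"),
      catalyst    = c("A", "B", "C")
    )
    nrow(design)   # 2 * 2 * 3 = 12 runs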
R code for distance-based cell lineage reconstruction. Our methods won both sub-challenges 2 and 3 of the Allen Institute Cell Lineage Reconstruction DREAM Challenge in 2020. References: Gong et al. (2021) <doi:10.1016/j.cels.2021.05.008>, Gong et al. (2022) <doi:10.1186/s12859-022-04633-x>.
Download and import time series from <http://www.dataseries.org>, a comprehensive and up-to-date collection of open data from Switzerland.
This package provides tools for working with multiple related tables, stored as data frames or in a relational database. Multiple tables (data and metadata) are stored in a compound object, which can then be manipulated with a pipe-friendly syntax.
This package implements an efficient algorithm for solving sparse-penalized support vector machines with kernel density convolution. It is designed for high-dimensional classification tasks, supporting lasso (L1) and elastic-net penalties for sparse feature selection and providing options for tuning kernel bandwidth and penalty weights. The dcsvm package is applicable to fields such as bioinformatics, image analysis, and text classification, where high-dimensional data commonly arise. Learn more about the methodology and algorithm in Wang, Zhou, Gu, and Zou (2023) <doi:10.1109/TIT.2022.3222767>.
Basic time series functionality such as listing missing values, applying arbitrary aggregation as well as rolling (asymmetric) window functions, and automatically detecting periodicity. As it is mainly based on data.table, it is fast and (in combination with the R6 package) offers reference semantics. In addition to its native R6 interface, it provides an S3 interface for those who prefer the latter. Last but not least, its functional approach allows functionality from many other packages to be incorporated.
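As a plain data.table illustration (not this package's own interface), the kind of rolling-window and aggregation work referred to above looks like this:

    library(data.table)

    # A daily series with a rolling 7-day mean and a monthly aggregate.
    dt <- data.table(date  = seq(as.Date("2024-01-01"), by = "day", length.out = 60),
                     value = rnorm(60))
    dt[, roll_mean_7 := frollmean(value, n = 7)]          # rolling window function
    dt[, .(monthly_mean = mean(value)),
       by = .(month = format(date, "%Y-%m"))]             # arbitrary aggregation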
This package provides a Scannerless GLR parser/parser generator. Note that GLR stands for "generalized LR", where L stands for "left-to-right" and R stands for "rightmost (derivation)". For more information see <https://en.wikipedia.org/wiki/GLR_parser>. This parser is based on the Tomita (1987) algorithm (the paper can be found at <https://aclanthology.org/P84-1073.pdf>). The original dparser package documentation can be found at <https://dparser.sourceforge.net/>. This allows you to add mini-languages to R (like rxode2's ODE mini-language; Wang, Hallow, and James (2015) <doi:10.1002/psp4.12052>) or to parse other languages like NONMEM to automatically translate them to R code. To use this in your code, add LinkingTo: dparser to your DESCRIPTION file and use #include <dparser.h> instead of #include <dparse.h>. This also provides an R-based port of the make_dparser <https://dparser.sourceforge.net/d/make_dparser.cat> command, called mkdparser(). Additionally, you can parse an arbitrary grammar within R using the dparse() function, which works on most OSes and is mainly intended for grammar testing. The fastest parsing, of course, occurs at the C level, which is the suggested approach.
Solves quadratic programming problems using Richard L. Dykstra's cyclic projection algorithm. The routine allows for a combination of equality and inequality constraints. See Dykstra (1983) <doi:10.1080/01621459.1983.10477029> for details.
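A bare-bones base-R sketch of the cyclic projection idea, projecting a point onto an intersection of two convex sets, may help; it is not the package's own interface and omits the quadratic objective handled there:

    # Dykstra's cyclic projection: project z onto {x >= 0} intersected with {sum(x) = 1}.
    project_intersection <- function(z, iters = 100) {
      proj_nonneg <- function(x) pmax(x, 0)                     # projection onto x >= 0
      proj_sum1   <- function(x) x - (sum(x) - 1) / length(x)   # projection onto sum(x) = 1
      x <- z
      p1 <- p2 <- rep(0, length(z))                             # Dykstra's correction terms
      for (k in seq_len(iters)) {
        y <- proj_nonneg(x + p1); p1 <- x + p1 - y; x <- y
        y <- proj_sum1(x + p2);   p2 <- x + p2 - y; x <- y
      }
      x
    }
    project_intersection(c(0.8, -0.2, 0.9))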
Detects abrupt changes in time series, modelling local fluctuations as a random walk process and autocorrelated noise as an AR(1) process. See Romano, G., Rigaill, G., Runge, V., Fearnhead, P. (2021) <doi:10.1080/01621459.2021.1909598>.
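Simulated data of the kind this method targets can be generated with base R alone (this is an illustration of the model, not the package's interface):

    # Random-walk fluctuations, one abrupt change, and AR(1) noise.
    set.seed(1)
    n     <- 500
    drift <- cumsum(rnorm(n, sd = 0.05))                       # local fluctuations
    drift[251:n] <- drift[251:n] + 5                           # abrupt change at t = 251
    noise <- as.numeric(arima.sim(list(ar = 0.7), n = n, sd = 0.5))  # AR(1) noise
    y     <- drift + noise
    plot(y, type = "l")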
This package provides a flexible container to transport and manipulate complex sets of data. These data may consist of multiple data files and associated meta data and ancillary files. Individual data objects have associated system level meta data, and data files are linked together using the OAI-ORE standard resource map which describes the relationships between the files. The OAI-ORE standard is described at <https://www.openarchives.org/ore/>. Data packages can be serialized and transported as structured files that have been created following the BagIt specification. The BagIt specification is described at <https://datatracker.ietf.org/doc/html/draft-kunze-bagit-08>.
Tool collection for common and not so common data science use cases. This includes custom-made algorithms for data management as well as value calculations that are hard to find elsewhere because of their specificity but would nonetheless be a waste to lose. Currently available functionality: find sub-graphs in an edge list data.frame, find the mode or modes in a vector of values, extract (a) specific regular expression group(s), generate ISO time stamps that play well with file names, or generate URL parameter lists by expanding value combinations.
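Two of the listed use cases can be sketched with base R only; the package's own function names are not given here, so these are generic equivalents:

    # An ISO-like time stamp that plays well with file names (no colons) ...
    stamp <- format(Sys.time(), "%Y-%m-%dT%H-%M-%S")
    paste0("report_", stamp, ".csv")

    # ... and extracting a specific regular expression group.
    m <- regmatches("version 10.4.2", regexec("([0-9]+)\\.([0-9]+)", "version 10.4.2"))
    m[[1]][2]   # first captured group: "10"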
Simple feature stores and tools for creating personalised feature stores. diseasystore powers feature stores that can automatically link and aggregate features to a given stratification level. These feature stores are automatically time-versioned (powered by the SCDB package) and allow you to easily and dynamically compute features as part of your continuous integration.