Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in response headers.
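For example, a minimal sketch of calling this endpoint from R with the httr package. The base URL and the exact pagination header name are assumptions; substitute the ones this service actually uses:

    library(httr)
    # Hypothetical host; replace with this service's actual base URL.
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages for this page
    headers(resp)   # pagination details (e.g. total pages) live in here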
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
The typicality and eccentricity data analysis (TEDA) framework was put forward by Angelov (2013) <DOI:10.14313/JAMRIS_2-2014/16>. It has since been developed into multiple techniques and provides a non-parametric way of determining how similar an observation, from a process that is not purely random, is to other observations generated by the process. This package provides code to use the batch and recursive TEDA methods that have been published.
This package provides a compilation of fish stock assessment methods for the analysis of length-frequency data in the context of data-poor fisheries. It includes the methods and examples from the FAO manual by P. Sparre and S.C. Venema (1998), "Introduction to tropical fish stock assessment" (<https://openknowledge.fao.org/server/api/core/bitstreams/bc7c37b6-30df-49c0-b5b4-8367a872c97e/content>), as well as other more recent methods.
Accompanies the texts Time Series for Data Science with R by Woodward, Sadler, and Robertson, and Applied Time Series Analysis with R (2nd edition) by Woodward, Gray, and Elliott. It is helpful for data analysis and for time series instruction.
Sometimes you need to split your data and work on the two chunks independently before bringing them back together. Taber allows you to do that with its two functions.
An R wrapper for TooManyCells, a command-line program for clustering, visualizing, and quantifying cell clade relationships. See <https://gregoryschwartz.github.io/too-many-cells/> for more details.
Archive and manage time series data from official statistics. The timeseriesdb package was designed to manage a large catalog of time series from official statistics, which are typically published on a monthly, quarterly, or yearly basis. Thus timeseriesdb is optimized to handle updates caused by data revisions as well as elaborate, multilingual meta information.
Identifies clusters of individual longitudinal trajectories. In the spirit of Leffondre et al. (2004), the procedure identifies each trajectory with a point in the space of measures. In this context, a measure is a quantity meant to capture a certain characteristic feature of the trajectory. The points in the space of measures are then clustered using a version of spectral clustering.
Binary ties limit the richness of network analyses, as relations are unique. The two-mode structure contains a number of features that are lost when projecting it to a one-mode network. Longitudinal datasets allow for an understanding of the causal relationship among ties, which is not the case in cross-sectional datasets, as ties are dependent upon each other.
This package provides a tbl_ts class (the tsibble) for temporal data in a data- and model-oriented format. The tsibble provides tools to easily manipulate and analyse temporal data, such as filling in time gaps and aggregating over calendar periods.
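As an illustration, a minimal sketch of both operations using the tsibble API (the example data is made up):

    library(tsibble)
    library(dplyr)
    # Made-up daily series with a missing day (Jan 3)
    weather <- tibble::tibble(
      date = as.Date("2020-01-01") + c(0, 1, 3),
      temp = c(5.1, 4.8, 6.2)
    )
    ts <- as_tsibble(weather, index = date)
    fill_gaps(ts)   # inserts an explicit NA row for the missing day
    # Aggregate the daily series over calendar months
    ts %>%
      index_by(month = ~ yearmonth(.)) %>%
      summarise(avg_temp = mean(temp, na.rm = TRUE))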
Algorithms for accelerating the convergence of slow, monotone sequences from smooth contraction mappings, such as the EM and MM algorithms. It can be used to accelerate any smooth, linearly convergent fixed-point iteration. A tutorial-style introduction to this package is available in a vignette on the CRAN download page or, when the package is loaded in an R session, with vignette("turboEM").
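For instance, a minimal sketch using the package's turboem() function on a toy contraction mapping; whether an objective function must also be supplied depends on the acceleration method chosen, so consult the vignette:

    library(turboEM)
    # Toy contraction mapping: the fixed point of cos(x), roughly 0.739
    fixpt <- function(par, ...) cos(par)
    res <- turboem(par = 1, fixptfn = fixpt, method = c("em", "squarem"))
    pars(res)   # fixed-point estimates from the plain and accelerated schemes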
An implementation of turtle graphics <http://en.wikipedia.org/wiki/Turtle_graphics>. Turtle graphics comes from Papert's language Logo and has been used to teach concepts of computer programming.
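A minimal sketch, assuming this is the TurtleGraphics package with its turtle_* commands:

    library(TurtleGraphics)
    turtle_init()    # open a canvas with the turtle at the centre
    # Draw a right angle: two 30-unit segments joined by a 90-degree turn
    turtle_forward(30)
    turtle_right(90)
    turtle_forward(30)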
This data set provides information on the fate of passengers on the fatal maiden voyage of the ocean liner "Titanic", summarized according to economic status (class), sex, age, and survival. Whereas the base R Titanic data found by calling data("Titanic") is an array resulting from cross-tabulating 2201 observations, these data sets are the individual non-aggregated observations, formatted in a machine learning context with a training sample, a testing sample, and two additional data sets that can be used for deeper machine learning analysis. These data sets are also the ones downloaded from the Kaggle competition, thus lowering the barrier to entry for users new to R or machine learning.
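A minimal sketch of loading the samples, assuming this is the CRAN titanic package, which ships titanic_train and titanic_test:

    library(titanic)
    str(titanic_train)   # labelled training sample (includes Survived)
    str(titanic_test)    # testing sample without the Survived column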
Uses indicator species scores across binary partitions of a sample set to detect congruence in taxon-specific changes of abundance and occurrence frequency along an environmental gradient as evidence of an ecological community threshold. Relevant references include Baker and King (2010) <doi:10.1111/j.2041-210X.2009.00007.x>, King and Baker (2010) <doi:10.1899/09-144.1>, and Baker and King (2013) <doi:10.1899/12-142.1>.
This package provides a specialization of dplyr data manipulation verbs that parse and build expressions which are ultimately evaluated by data.table, letting it handle all optimizations. A set of additional verbs is also provided to facilitate some common operations on a subset of the data.
Unit testing is a core component of automated CI/CD pipelines. tinytest was developed as a lightweight, zero-dependency alternative to testthat. To integrate tinytest results into common CI/CD systems, the test results need to be captured and converted to the JUnit XML format. tinytest2JUnit enables this conversion while itself staying lightweight, with tinytest as its only dependency.
Different estimators are provided to solve the blind source separation problem for multivariate time series with stochastic volatility and supervised dimension reduction problem for multivariate time series. Different functions based on AMUSE and SOBI are also provided for estimating the dimension of the white noise subspace. The package is fully described in Nordhausen, Matilainen, Miettinen, Virta and Taskinen (2021) <doi:10.18637/jss.v098.i15>.
This package provides functions to estimate the insertion and deletion rates of transposable element (TE) families. The estimation of the insertion rate consists of an improved estimate of the age distribution that takes random mutations into account, and an adjustment by the deletion rate. A hypothesis test for a uniform insertion rate is also implemented. This package implements the methods proposed in Dai et al. (2018).
Provides the core functionality to transform longitudinal data to complex-time (kime) data using analytic and numerical techniques, visualize the original time-series and reconstructed kime-surfaces, and perform the model-based (e.g., tensor-linear regression) and model-free classification and clustering methods in the book Dinov, ID and Velev, MV (2021), "Data Science: Time Complexity, Inferential Uncertainty, and Spacekime Analytics", De Gruyter STEM Series, ISBN 978-3-11-069780-3, <https://www.degruyter.com/view/title/576646>. The package includes 18 core functions, which can be separated into three groups: (1) draw longitudinal data, such as functional magnetic resonance imaging (fMRI) time-series, and forecast or transform the time-series data; (2) simulate real-valued time-series data, e.g., fMRI time-courses, detect the activated areas, report the corresponding p-values, and visualize the p-values in the 3D brain space; (3) perform Laplace transforms and kime-surface reconstructions of the fMRI data.
Helps R users get data from Tushare Pro (<https://tushare.pro>). Tushare Pro is a platform and community with many members working in the financial area. It provides financial data such as stock prices, financial report statements, and digital coin data.
This package provides a toolset that allows you to easily import and tidy data sheets retrieved from Gapminder data web tools. It thereby reduces the time spent cleaning Gapminder indicator data sheets, which are very messy.
Analyse data from longitudinal studies to characterise changes in the values of semi-quantitative outcome variables within individual subjects, using high-performance C++ code to enable rapid processing of large datasets. A flexible methodology is available for codifying these state transitions.
Efficient sampling of truncated multivariate (scale) mixtures of normals under linear inequality constraints is nontrivial due to the analytically intractable normalizing constant. Meanwhile, traditional methods may be subject to numerical issues, especially when the dimension is high and dependence is strong. Algorithms proposed by Li and Ghosh (2015) <doi:10.1080/15598608.2014.996690> are adopted for overcoming difficulties in simulating truncated distributions. Efficient rejection sampling for simulating the truncated univariate normal distribution is included in the package, which shows superiority in terms of acceptance rate and numerical stability compared to existing methods and R packages. An efficient function is provided for sampling from the truncated multivariate normal distribution subject to convex polytope restriction regions, based on a Gibbs sampler for the conditional truncated univariate distribution. By extending the sampling method, a function for sampling the truncated multivariate Student's t distribution is also developed. Moreover, the proposed method and computation remain valid for high-dimensional and strong-dependence scenarios. Empirical results in Li and Ghosh (2015) illustrated the superior performance in terms of various criteria (e.g., mixing and integrated autocorrelation time).
An interface to the mclust package to easily carry out latent profile analysis ("LPA"). Provides functionality to estimate commonly specified models. Follows a tidy approach, in that output is in the form of a data frame that can subsequently be computed on. Also has functions to interface with the commercial Mplus software via the MplusAutomation package.
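For example, a minimal sketch, assuming this is the tidyLPA package and using its bundled pisaUSA15 data:

    library(tidyLPA)
    library(dplyr)
    # Estimate a three-profile model on three PISA scales
    pisaUSA15 %>%
      select(broad_interest, enjoyment, self_efficacy) %>%
      single_imputation() %>%
      estimate_profiles(3)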