Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in response headers.
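For example, the API can be called from R like this (a minimal sketch using the curl package; the host name is a placeholder for wherever the service is deployed):

    library(curl)
    res <- curl_fetch_memory("https://HOST/api/packages?search=hello&page=1&limit=20")
    cat(rawToChar(res$content))   # the matching packages returned by the API
    parse_headers(res$headers)    # pagination information lives in these headers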
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions for analyzing citizens' bicycle usage patterns and predicting rental amounts under specific conditions. Functions in this package interact with data in the tashudata package, a drat repository. The tashudata package contains rental/return history for the public bicycle system ('Tashu'), three years of weather data, and bicycle station information. To install this data package, see the instructions at <https://github.com/zeee1/Tashu_Rpackage>. The top10_stations() and top10_paths() functions visualize the ten most used stations and paths. daily_bike_rental() and monthly_bike_rental() show daily and monthly bicycle rental amounts. create_train_dataset() and create_test_dataset() are data-processing functions for prediction: bicycle rental history from 2013 to 2014 is used to create the training dataset, and 2015 history is used for the test dataset. Users can build a random-forest prediction model with create_train_model() and predict 2015 bicycle rental amounts with predict_bike_rental().
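A typical session, following the description above, might look like this (a sketch only; the exact function signatures are assumptions, so consult the package documentation):

    library(tashu)
    top10_stations()                    # plot the ten most used stations
    daily_bike_rental()                 # plot daily rental amounts
    train <- create_train_dataset()     # 2013-2014 rental history
    test  <- create_test_dataset()      # 2015 rental history
    model <- create_train_model(train)  # random-forest prediction model
    predict_bike_rental(model, test)    # predicted 2015 rental amounts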
Fits tree-structured varying coefficient models (Berger et al. (2019), <doi:10.1007/s11222-018-9804-8>). Simultaneously detects covariates with varying coefficients and effect modifiers that induce varying coefficients, if they are present.
Implements text and sentiment analysis with texter. Generates sentiment scores on text data and also visualizes sentiments. texter allows you to quickly generate insights from your data. It includes support for lexicons such as NRC and Bing.
Accurately estimates phase shifts by accounting for period changes and for the point in the circadian cycle at which the stimulus occurs. See Tackenberg et al. (2018) <doi:10.1177/0748730418768116>.
Tensor Composition Analysis (TCA) allows the deconvolution of two-dimensional data (features by observations) coming from a mixture of heterogeneous sources into a three-dimensional matrix of signals (features by observations by sources). The TCA framework further allows testing the features in the data for different statistical relations with an outcome of interest while modeling source-specific effects; in particular, it allows looking for statistical relations between source-specific signals and an outcome. For example, TCA can deconvolve bulk tissue-level DNA methylation data (methylation sites by individuals) into a three-dimensional tensor of cell-type-specific methylation levels for each individual (i.e. methylation sites by individuals by cell types), and it allows detecting cell-type-specific statistical relations (associations) with phenotypes. For more details see Rahmani et al. (2019) <doi:10.1038/s41467-019-11052-9>.
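A minimal sketch of such a deconvolution, assuming the package's tca() and tensor() entry points; X and W stand for the user's own matrices and are not defined here:

    library(TCA)
    # X: methylation sites by individuals (features x observations), assumed given
    # W: individuals by cell types (observations x sources), rows summing to 1
    mdl <- tca(X = X, W = W)             # fit the TCA model
    Z   <- tensor(X = X, tca.mdl = mdl)  # sites x individuals x cell types signals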
Calculates empirical TL-moments (trimmed L-moments) of arbitrary order and trimming, and converts them to distribution parameters.
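For example (a minimal sketch, assuming the package's TLMoments() and parameters() functions; the exponential sample and the TL(0,1) trimming are arbitrary choices):

    library(TLMoments)
    x <- rexp(100, rate = 0.5)                      # arbitrary sample data
    tlm <- TLMoments(x, leftrim = 0, rightrim = 1)  # empirical TL(0,1)-moments
    parameters(tlm, distr = "exp")                  # convert to distribution parameters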
This package provides a tm Source to create corpora from articles exported from the LexisNexis content provider as HTML files. It is able to read both text content and metadata (including source, date, title, author and pages). Note that the file format is highly unstable: there is no warranty that this package will work for your corpus, and you may have to adjust the code to adapt it to your particular format.
Utilizes the logger framework to record events within a package, specific to the teal family of packages. Supports logging namespaces, hierarchical logging, various log destinations, vectorization, and more.
Data generated from independent and consecutive GillespieSSA runs for a generic biochemical network is formatted as rows, each of which constitutes an observation. The first column of each row is the computed timestep for that run; subsequent columns hold the number of molecules of each participating molecular species, or "metabolite", of the network. In this way TemporalGSSA is a wrapper for the R package GillespieSSA. At least 30 observations are required for the generated data to be statistically significant. TemporalGSSA transforms this raw data into a simulation-time-dependent, metabolite-specific trial. Each such trial is defined as a set of linear models (n >= 30) between a timestep and the number of molecules of a metabolite, and each linear model is characterized by coefficients such as the slope and an arbitrary constant. The user must enter an integer from 1 to 4, specifying the statistical modality used to compute a representative timestep (mean, median, random, all). These arguments are mandatory and are checked: the numeric indicator "0" indicates suitability, while "1" prompts the user to revise and re-enter their data. An optional logical argument controls output to the console, the default TRUE being curtailed and FALSE verbose. The coefficients of the linear models are averaged (mean slope, mean constant) and incorporated into a metabolite-specific linear regression model as the dependent variable; the independent variable is the representative timestep chosen previously. The generated data is the imputed molecule number for an in silico experiment with n >= 30 observations. These steps can be replicated with multiple sets of observations, and the resulting "technical replicates" can be statistically evaluated (mean, standard deviation) to yield simulation-time-dependent molecule numbers for each metabolite. For SSA-generated datasets with varying simulation times, TemporalGSSA generates a simulation-time-dependent trajectory for each metabolite of the biochemical network under study. The relevant publication with the mathematical derivation of the algorithm is (2022, Journal of Bioinformatics and Computational Biology) <doi:10.1142/S0219720022500184>. The algorithm has been deployed in the following publications: (2021, Heliyon) <doi:10.1016/j.heliyon.2021.e07466> and (2016, Journal of Theoretical Biology) <doi:10.1016/j.jtbi.2016.07.002>.
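A loose base-R illustration of the regression idea described above (a sketch of the described algorithm only, not the TemporalGSSA interface; the data layout and names are assumptions):

    # `obs` is assumed to be a data frame with >= 30 rows: a `timestep` column
    # plus one column of molecule counts per metabolite.
    impute_molecules <- function(obs, modality = c("mean", "median")) {
      modality <- match.arg(modality)
      stopifnot(nrow(obs) >= 30)  # the package requires at least 30 observations
      t_rep <- switch(modality,
                      mean   = mean(obs$timestep),
                      median = median(obs$timestep))  # representative timestep
      metabolites <- setdiff(names(obs), "timestep")
      # One linear model per metabolite (molecules ~ timestep), evaluated at the
      # representative timestep to impute a molecule number.
      sapply(metabolites, function(m) {
        fit <- lm(obs[[m]] ~ obs$timestep)
        unname(coef(fit)[1] + coef(fit)[2] * t_rep)
      })
    }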
An interface to twitter-text, a JavaScript library which is responsible for determining the length/validity of a tweet and identifying/linking any URLs or special tags (e.g. mentions or hashtags) which may be present.
Helper functions for creating, editing, and testing tutorials created with the learnr package. Provides a simple method for allowing students to download their answers to tutorial questions. For examples of its use, see the r4ds.tutorials package.
Time Series Qn is a package with applications of the Qn estimator of Rousseeuw and Croux (1993) <doi:10.1080/01621459.1993.10476408> to univariate and multivariate time series in the time and frequency domains. More specifically, the robust estimation of autocorrelation or autocovariance matrix functions from Ma and Genton (2000, 2001) <doi:10.1111/1467-9892.00203>, <doi:10.1006/jmva.2000.1942> and Cotta (2017) <doi:10.13140/RG.2.2.14092.10883> is provided. The robust pseudo-periodogram of Molinares et al. (2009) <doi:10.1016/j.jspi.2008.12.014> is also given. This package also provides the M-estimator of the long-memory parameter d based on the robustification of the GPH estimator proposed by Reisen et al. (2017) <doi:10.1016/j.jspi.2017.02.008>.
The main function of the package updates lmer()/glmer() models depending on their warnings, trying to avoid convergence and singularity problems.
Information on all of the TriMet stops in the Portland Metro Area, such as the longitude, latitude, cross street, and direction of each stop. TriMet has catalogued these stops, 6880 in total.
Topological data analysis studies structure and shape of the data using topological features. We provide a variety of algorithms to learn with persistent homology of the data based on functional summaries for clustering, hypothesis testing, visualization, and others. We refer to Wasserman (2018) <doi:10.1146/annurev-statistics-031017-100045> for a statistical perspective on the topic.
Provides generic methods that are used by more than one package, avoiding conflicts. This package will be imported by tidySingleCellExperiment and tidyseurat.
This package provides methods for extracting various features from time series data. The features provided are those from Hyndman, Wang and Laptev (2015) <doi:10.1109/ICDMW.2015.104>, Kang, Hyndman and Smith-Miles (2017) <doi:10.1016/j.ijforecast.2016.09.004> and from Fulcher, Little and Jones (2013) <doi:10.1098/rsif.2013.0048>. Features include spectral entropy, autocorrelations, measures of the strength of seasonality and trend, and so on. Users can also define their own feature functions.
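For example, applied to two of R's built-in series (a minimal sketch; tsfeatures() returns one row of features per series):

    library(tsfeatures)
    tsfeatures(list(AirPassengers, USAccDeaths))  # spectral entropy, trend, seasonality, ...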
This package provides a connector to the What3Words (http://what3words.com/) service, which represents each 3m by 3m square on earth with a unique trio of English-language words.
This is a small package to provide consistent tick marks for plotting ggplot2 figures. It provides breaks and labels for ggplot2 without requiring ggplot2 to be installed.
Data handling and estimation functions for animal movement estimation from archival or satellite tags. Helper functions are included for making image summaries binned by time interval from Markov Chain Monte Carlo simulations.
An extension of ExPosition for two-table analyses, specifically discriminant analyses.
An aid for text mining in R, with a syntax that should be familiar to experienced R users. Provides a wrapper for several topic models that take similarly-formatted input and give similarly-formatted output. Has additional functionality for analysis of, and diagnostics for, topic models.
The two-parameter Xgamma and Poisson Xgamma distributions are analyzed, covering standard distribution and regression functions, maximum likelihood estimation, quantile functions, probability density and mass functions, cumulative distribution functions, and random number generation. References: Sen, S., Chandra, N. and Maiti, S. S. (2018). On properties and applications of a two-parameter XGamma distribution. Journal of Statistical Theory and Applications, 17(4): 674--685. <doi:10.2991/jsta.2018.17.4.9>. Wani, M. A., Ahmad, P. B., Para, B. A. and Elah, N. (2023). A new regression model for count data with applications to health care data. International Journal of Data Science and Analytics. <doi:10.1007/s41060-023-00453-1>.
Sometimes you need to split your data and work on the two chunks independently before bringing them back together. Taber allows you to do that with its two functions.