Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned in the response headers.
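For example, a minimal sketch of calling this endpoint from R with the httr package (the base URL here is a stand-in; substitute this site's address):

    library(httr)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination information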
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package uses the Twitter APIs to obtain the data needed for sentiment analysis, acting as middleware for an approved Twitter application. Users who subscribe to the application with their Twitter account receive a special access key; with it, a user-defined keyword can be searched in recent tweets and the results retrieved (more information: <https://github.com/hakkisabah/tsentiment>). A companion service, tsentiment-services, has been developed to provide all of these operations (for more information: <https://github.com/hakkisabah/tsentiment-services>). After a successful analysis, and subject to the permissions granted by the user, the resulting word cloud and bar graph are saved in the user's folder, where they can be viewed. Note that each new analysis deletes the previous visual results. tsentiment provides this middleware service for easy data extraction from Twitter free of charge; in return, 30 requests are deducted from the user's total rate limit and reserved for application analytics, leaving the remaining requests available for searches. For the endpoint limits, see the "GET search/tweets" row of the rate-limit table at <https://developer.twitter.com/en/docs/twitter-api/v1/rate-limits>.
This package provides the "r, q, p, and d" distribution functions for the triangle distribution. It also includes maximum likelihood estimation of the parameters.
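A minimal sketch, assuming the conventional d/p/q/r naming (dtriangle, ptriangle, qtriangle, rtriangle) with lower limit a, upper limit b, and mode c:

    library(triangle)
    x <- rtriangle(1000, a = 0, b = 1, c = 0.25)   # random draws
    dtriangle(0.5, a = 0, b = 1, c = 0.25)         # density at 0.5
    ptriangle(0.5, a = 0, b = 1, c = 0.25)         # P(X <= 0.5)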
This package provides a suite of descriptive and inferential methods designed to evaluate one or more biomarkers for their ability to guide patient treatment recommendations. The package includes functions to assess the calibration of risk models and to plot, evaluate, and compare markers. See Janes H, Brown MD, Huang Y, et al. (2014) <doi:10.1515/ijb-2012-0052> for further details.
Use SQL SELECT statements to query R data frames.
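A minimal usage sketch, assuming this is the sqldf package and its sqldf() entry point, which runs a query against data frames in the calling environment:

    library(sqldf)
    df <- data.frame(name = c("a", "b", "c"), value = c(1, 2, 3))
    sqldf("SELECT name, value FROM df WHERE value > 1")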
This package provides a simple approach to constructing the dynamic materials model suggested by Prasad and Gegel (1984) <doi:10.1007/BF02664902>, and can easily generate various processing maps based on this model. The calculation results contain the full set of material constants, the power dissipation efficiency factor, and rheological properties; all of them can be exported, so further analysis and customized plots are possible as well.
An extension to the R tidy data environment for automated machine learning. The package allows fitting and cross-validation of linear regression and classification algorithms on grouped data.
This package provides a teal_data class as a unified data model for teal applications, focusing on reproducibility and relational data.
This package provides a dataset of predefined color palettes based on the Star Trek science fiction series, associated color palette functions, and additional functions for generating customized palettes that are on theme. The package also offers functions for applying the palettes to plots made using the ggplot2 package.
Download summary files from the Census Bureau <https://www2.census.gov/> and extract data, in particular high-resolution data at the block, block group, and tract levels, from the decennial census and American Community Survey 1-year and 5-year estimates.
This package provides an integrated user interface and workflow for the analysis of running, cycling and swimming data from GPS-enabled tracking devices through the trackeR <https://CRAN.R-project.org/package=trackeR> R package.
Trust region algorithm for nonlinear optimization. Efficient when the Hessian of the objective function is sparse (i.e., relatively few nonzero cross-partial derivatives). See Braun, M. (2014) <doi:10.18637/jss.v060.i04>.
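A minimal sketch, assuming this is the trustOptim package and its trust.optim() interface taking a starting point, objective function, and gradient (the sparse-Hessian method additionally takes a Hessian function):

    library(trustOptim)
    fn <- function(x) sum((x - 1)^2)   # simple quadratic objective
    gr <- function(x) 2 * (x - 1)      # its gradient
    res <- trust.optim(x = c(0, 0), fn = fn, gr = gr, method = "BFGS")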
Compute age-adjusted rates by direct and indirect methods and other epidemiological indicators in a tidy way, wrapping functions from the epitools package.
This package provides a mathematical optimization procedure, in combination with a statistical bootstrap, for the estimation of the latent signals (sometimes called scores) informing the global consensus ranking (often called the aggregation ranking). To solve mid- to large-scale problems, users should install the gurobi optimiser (available from <https://www.gurobi.com/>).
Extract trends from monthly and quarterly economic time series. Provides two main functions: augment_trends() for pipe-friendly tibble workflows and extract_trends() for direct time series analysis. Includes key econometric filters and modern parameter experimentation tools.
Time Series Qn is a package with applications of the Qn estimator of Rousseeuw and Croux (1993) <doi:10.1080/01621459.1993.10476408> to univariate and multivariate time series in the time and frequency domains. More specifically, it provides the robust estimation of the autocorrelation and autocovariance matrix functions of Ma and Genton (2000, 2001) <doi:10.1111/1467-9892.00203>, <doi:10.1006/jmva.2000.1942> and Cotta (2017) <doi:10.13140/RG.2.2.14092.10883>, as well as the robust pseudo-periodogram of Molinares et al. (2009) <doi:10.1016/j.jspi.2008.12.014>. The package also provides the M-estimator of the long-memory parameter d based on the robustification of the GPH estimator proposed by Reisen et al. (2017) <doi:10.1016/j.jspi.2017.02.008>.
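The Qn scale estimator itself is available elsewhere on CRAN; as a generic illustration of its robustness (using robustbase::Qn(), not this package's API):

    library(robustbase)
    set.seed(1)
    x <- arima.sim(model = list(ar = 0.5), n = 200)  # AR(1) series
    x[c(50, 120)] <- 10                              # inject two outliers
    c(sd = sd(x), Qn = Qn(x))   # the Qn estimate resists the outliers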
Fundamental time series forecasting models such as autoregressive integrated moving average (ARIMA), exponential smoothing, and simple moving average are included. For ARIMA models, the output follows the traditional parameterisation of Box and Jenkins (1970, ISBN: 0816210942, 9780816210947). Furthermore, there are functions for detailed time series exploration and decomposition. All data and result visualisations are generated with ggplot2 instead of conventional R graphical output. For more details on the theoretical background of the models, see Hyndman, R.J. and Athanasopoulos, G. (2021) <https://otexts.com/fpp3/>.
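As a generic illustration of the Box-Jenkins ARIMA parameterisation referred to above (using base R's arima(), not this package's API):

    # Seasonal ARIMA(1,1,1)(0,1,1)[12] fit on a classic monthly series
    fit <- arima(AirPassengers, order = c(1, 1, 1),
                 seasonal = list(order = c(0, 1, 1), period = 12))
    predict(fit, n.ahead = 12)$pred   # 12-month-ahead forecasts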
This package provides tools for the computation and factorization of high-dimensional tensor products that are formed from smaller matrices. The methods are based on properties of Kronecker products (Searle 1982, p. 265, ISBN-10: 0470009616). The methodology was evaluated by benchmark testing, and its use is illustrated in Gaussian linear models (Lopez-Cruz et al., 2024) <doi:10.1093/g3journal/jkae001>.
Fit a threshold regression model based on the first-hitting-time of a boundary by the sample path of a Wiener diffusion process. The threshold regression methodology is well suited to applications involving survival and time-to-event data.
This package implements an entropy measure of dependence based on the Bhattacharya-Hellinger-Matusita distance. It can be used as a (nonlinear) autocorrelation/cross-correlation function for continuous and categorical time series. The package includes tests for serial and cross dependence and for nonlinearity based on this measure. Some routines have a parallel version that can be used in a multicore/cluster environment. The package makes use of S4 classes.
Several functions to allow comparisons of data across different geographies, in particular for Canadian census data from different censuses.
Performs hypothesis tests for the two-sample problem based on order statistics, along with power comparisons. Provides the test statistic, density, distribution function, quantile function, random number generation, and more.
To facilitate the analysis of positron emission tomography (PET) time activity curve (TAC) data, and to encourage open science and replicability, this package supports data loading and analysis of multiple TAC file formats. Functions are available to analyze loaded TAC data for individual participants or in batches. Major functionality includes weighted TAC merging by region of interest (ROI), calculating models including standardized uptake value ratio (SUVR) and distribution volume ratio (DVR, Logan et al. 1996 <doi:10.1097/00004647-199609000-00008>), basic plotting functions and calculation of cut-off values (Aizenstein et al. 2008 <doi:10.1001/archneur.65.11.1509>). Please see the walkthrough vignette for a detailed overview of tacmagic functions.
Calculates the number of true positives and false positives from a dataset formatted for the jackknife alternative free-response receiver operating characteristic (JAFROC) method of statistical analysis, as explained in Chakraborty DP (2017), "Observer Performance Methods for Diagnostic Imaging - Foundations, Modeling, and Applications with R-Based Examples", Taylor & Francis <https://www.crcpress.com/9781482214840>.
Trelliscope is a scalable, flexible, interactive approach to visualizing data (Hafen, 2013 <doi:10.1109/LDAV.2013.6675164>). This package provides methods that make it easy to create a Trelliscope display specification for TrelliscopeJS. High-level functions are provided for creating displays from within tidyverse or ggplot2 workflows. Low-level functions are also provided for creating new interfaces.
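A minimal sketch, assuming this is the trelliscopejs package and its facet_trelliscope() helper for ggplot2 workflows:

    library(ggplot2)
    library(trelliscopejs)
    ggplot(mpg, aes(displ, hwy)) +
      geom_point() +
      facet_trelliscope(~ manufacturer)   # one interactive panel per group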