Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
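A minimal sketch of calling this endpoint from Python with the requests library. The base URL below is a placeholder for this site's address, the response is assumed to be JSON, and the exact pagination header names are whatever the server sends back:

    import requests

    # Placeholder base URL; substitute this site's address.
    BASE_URL = "https://example.org"

    # Search for version 10 of gcc: first page, 20 results per page.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "gcc@10", "page": 1, "limit": 20},
    )
    resp.raise_for_status()
    packages = resp.json()   # assumes the endpoint returns JSON
    print(resp.headers)      # pagination details are carried in the response headers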
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides functions to get Korean text samples from news articles on Naver <https://news.naver.com/>, a popular news portal service in Korea.
Next-Generation Clustered Heat Maps (NG-CHMs) allow for dynamic exploration of heat map data in a web browser. NGCHM allows users to create both stand-alone HTML files containing a Next-Generation Clustered Heat Map, and .ngchm files to view in the NG-CHM viewer. See Ryan MC, Stucky M, et al (2020) <doi:10.12688/f1000research.20590.2> for more details.
Computes the probability density function, the cumulative distribution function, the quantile function, random numbers and measures of inference for the following families: the exponentiated generalized gull alpha power family, the exponentiated gull alpha power family, and the gull alpha power family.
This package allows you to generate reporting workflows around nlmixr2 analyses with outputs in Word and PowerPoint. You can specify figures, tables and report structure in a user-definable YAML file. You can also use the internal functions to access the figures and tables so they can be included in other outputs (e.g. R Markdown).
Novel responsive tools for developing R-based Shiny dashboards and applications. The scripts and style sheets are based on jQuery <https://jquery.com/> and Bootstrap <https://getbootstrap.com/>.
Linear regression models and generalized linear models with nonparametric network effects on network-linked observations. The model was originally proposed by Le and Li (2022) <doi:10.48550/arXiv.2007.00803> for observations that are connected by a network or similar relational data structure. A more recent work by Wang, Le and Li (2024) <doi:10.48550/arXiv.2410.01163> further extends the framework to generalized linear models. All these models are implemented in the current package. The model does not assume that the relational data or network structure is precisely observed; thus, the method is provably robust to a certain level of perturbation of the network structure. The package contains estimation and inference functions for these models.
The ntfy (pronounced "notify") service is a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, entirely without signup, cost or setup. It's also open source if you want to run your own server. Visit <https://ntfy.sh> for more details.
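As an illustration of the underlying service (not of this R package's own functions), a notification can be published by sending a plain HTTP POST to a topic URL on ntfy.sh; the topic name below is a placeholder you would choose yourself:

    import requests

    # Publish a message to a self-chosen topic on the public ntfy.sh instance.
    # Anyone subscribed to the same topic (for example in the ntfy phone app)
    # receives the notification.
    requests.post(
        "https://ntfy.sh/my-example-topic",        # hypothetical topic name
        data="Backup finished".encode("utf-8"),    # message body
        headers={"Title": "Nightly job"},          # optional notification title
    )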
Six growth models are fitted using non-linear least squares. These are the Richards, the 3-, 4- and 5-parameter logistic, the Gompertz and the Weibull growth models. Reference: Reddy T., Shkedy Z., van Rensburg C. J., Mwambi H., Debba P., Zuma K. and Manda, S. (2021). "Short-term real-time prediction of total number of reported COVID-19 cases and deaths in South Africa: a data driven approach". BMC Medical Research Methodology, 21(1), 1-11. <doi:10.1186/s12874-020-01165-x>.
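For reference, common textbook parameterizations of three of these curves, written with an upper asymptote K, growth rate r, location parameter t_0 and shape parameter \nu, are sketched below; the package's own parameterizations may differ.

    \[
      y_{\text{logistic}}(t) = \frac{K}{1 + e^{-r(t - t_0)}}, \qquad
      y_{\text{Gompertz}}(t) = K\, e^{-e^{-r(t - t_0)}}, \qquad
      y_{\text{Richards}}(t) = K \bigl(1 + \nu\, e^{-r(t - t_0)}\bigr)^{-1/\nu}
    \]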
Estimation of relatively complex nonlinear mixed-effects models, including the Sigmoidal Mixed Model and the Piecewise Linear Mixed Model with abrupt or smooth transition, through a single intuitive line of code and with automated generation of starting values.
Datasets of driving offences and fines in New Zealand between 2009 and 2017. Originally published by the New Zealand Police at <http://www.police.govt.nz/about-us/publication/road-policing-driver-offence-data-january-2009-december-2017>.
Features tools for network data analysis and community detection. Provides multiple methods for fitting, model selection and goodness-of-fit testing in degree-corrected stochastic block models. Most of the computations are fast and scalable for sparse networks, especially for Poisson versions of the models. Implements the following: Amini, Chen, Bickel and Levina (2013) <doi:10.1214/13-AOS1138>; Bickel and Sarkar (2015) <doi:10.1111/rssb.12117>; Lei (2016) <doi:10.1214/15-AOS1370>; Wang and Bickel (2017) <doi:10.1214/16-AOS1457>; Zhang and Amini (2020) <arXiv:2012.15047>; Le and Levina (2022) <doi:10.1214/21-EJS1971>.
An implementation of the Naive Bayes Classifier (NBC) algorithm used for Verbal Autopsy (VA) built on code from Miasnikof et al (2015) <DOI:10.1186/s12916-015-0521-2>.
Get or set the UNIX priority (niceness) of the running R process.
This package performs Bayesian wavelet analysis using individual non-local priors as described in Sanyal & Ferreira (2017) <DOI:10.1007/s13571-016-0129-3> and non-local prior mixtures as described in Sanyal (2025) <DOI:10.48550/arXiv.2501.18134>.
Segmentation of short text sequences, such as hashtags, into a sequence of separate words, using a dictionary that may be built from a custom corpus of texts. A unigram dictionary is used to find the most probable word sequence, and an n-gram approach is used to determine possible segmentations given the text corpus.
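To illustrate the unigram idea (a generic sketch, not the package's implementation), the most probable segmentation can be found by dynamic programming over a dictionary of word frequencies; the toy dictionary and penalty for unknown words below are assumptions for the example:

    import math

    # Toy unigram dictionary; in practice it would be built from a text corpus.
    UNIGRAMS = {"hello": 0.6, "world": 0.3, "hell": 0.05, "o": 0.05}

    def word_logprob(word):
        # Unknown words get a penalty that grows with their length.
        p = UNIGRAMS.get(word)
        return math.log(p) if p else -20.0 * len(word)

    def segment(text):
        # best[i] holds (score, words) for the best segmentation of text[:i].
        best = [(0.0, [])] + [(-math.inf, [])] * len(text)
        for i in range(1, len(text) + 1):
            for j in range(max(0, i - 20), i):   # cap candidate word length
                score = best[j][0] + word_logprob(text[j:i])
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [text[j:i]])
        return best[-1][1]

    print(segment("helloworld"))   # ['hello', 'world']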
Design and analysis of flexible platform trials with non-concurrent controls. Functions for data generation, analysis, visualization and running simulation studies are provided. The implemented analysis methods are described in: Bofill Roig et al. (2022) <doi:10.1186/s12874-022-01683-w>, Saville et al. (2022) <doi:10.1177/17407745221112013> and Schmidli et al. (2014) <doi:10.1111/biom.12242>.
This package provides routines for plotting linkage and association results along a chromosome, with marker names displayed along the top border. There are also routines for generating BED and BedGraph custom tracks for viewing in the UCSC genome browser. The data reformatting program Mega2 uses this package to plot output from a variety of programs.
Non-parametric dimensionality reduction function. Reduction with and without feature selection. Plot functions. Automated feature selection. Kosztyan et al. (2024) <doi:10.1016/j.eswa.2023.121779>.
Three distinct methods are implemented for evaluating the sums of arbitrary negative binomial distributions. These methods are: Furman's exact probability mass function (Furman (2007) <doi:10.1016/j.spl.2006.06.007>), saddlepoint approximation, and a method of moments approximation. Functions are provided to calculate the density function, the distribution function and the quantile function of the convolutions in question given said evaluation methods. Functions for generating random deviates from negative binomial convolutions and for directly calculating the mean, variance, skewness, and excess kurtosis of said convolutions are also provided.
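For orientation, under the failure-count parameterization with sizes r_i and success probabilities p_i, the mean and variance of the convolution add across independent components (the package's negative binomial parameterization may differ):

    \[
      S = \sum_i X_i,\quad X_i \sim \mathrm{NB}(r_i, p_i)\ \text{independent:}\qquad
      \mathbb{E}[S] = \sum_i \frac{r_i (1 - p_i)}{p_i}, \qquad
      \operatorname{Var}[S] = \sum_i \frac{r_i (1 - p_i)}{p_i^2}
    \]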
Factorize binary matrices into rank-k components using the logistic function in the updating process. See e.g. Tomé et al. (2015) <doi:10.1007/s11045-013-0240-9>.
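A hedged sketch of one common way to write such a model, with an elementwise logistic link on a rank-k product (the package's exact formulation and update rule may differ):

    \[
      X_{ij} \approx \sigma\bigl((W H)_{ij}\bigr), \qquad \sigma(t) = \frac{1}{1 + e^{-t}},
    \]

with W an n x k and H a k x m factor matrix.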
This package provides a set of functions to access National Football League play-by-play data from <https://www.nfl.com/>.
Assist novice developers when preparing a single package or a set of integrated packages to submit to CRAN. Provide additional resources to facilitate the automation of the following individual or batch processing tasks: check local source packages; build local .tar.gz source files; install packages from local .tar.gz files; detect conflicts between function names in the environment. The additional resources include determining the identity and ordering of the packages to process when updating an imported package.
This package provides a collection of network analytic (convenience) functions which are missing in other standard packages. This includes triad census with attributes <doi:10.1016/j.socnet.2019.04.003>, core-periphery models <doi:10.1016/S0378-8733(99)00019-2>, and several graph generators. Most functions are built upon igraph.
Generate pseudonymous animal names that are delightful and easy to remember, like the Likable Leech and the Proud Chickadee. A unique pseudonym can be created for every unique element in a vector or row in a data frame. Pseudonyms can be customized and tracked over time, so that the same input is always assigned the same pseudonym.