Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
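For example, a minimal sketch of calling this endpoint from Python with the requests library (the host below is a placeholder for wherever the service is running, and the shape of the JSON body is not specified here):

import requests

# Placeholder host; replace with the address this service is served from.
BASE_URL = "https://example.org"

resp = requests.get(
    f"{BASE_URL}/api/packages",
    params={"search": "gcc@10", "page": 1, "limit": 20},
)
resp.raise_for_status()

results = resp.json()   # matching packages (body format not documented above)
print(resp.headers)     # pagination information is returned in these headers
print(results)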
If you'd like to join our channel webring send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This framework for periodic time series modeling is designated "CLIC" (the LIC for Distributed Cosine Regression Analysis). It is predicated on the assumption that the underlying data exhibit complex periodic structures beyond simple harmonic components. The method is described in Guo G. (2020) <doi:10.1080/02664763.2022.2053949>.
Compare two classifications or clustering solutions that may or may not have the same number of classes, and that might have hard or soft (fuzzy, probabilistic) membership. Calculate various metrics to assess how the clusters compare to each other. The calculations are simple, but provide a handy tool for users unfamiliar with matrix multiplication. This package is not geared towards traditional accuracy assessment for classification/mapping applications - the motivating use case is for comparing a probabilistic clustering solution to a set of reference or existing class labels that could have any number of classes (that is, without having to degrade the probabilistic clustering to hard classes).
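As a generic illustration of the kind of matrix calculation involved (a sketch only, not this package's interface): if A is an n x k1 membership matrix for the reference labels and B is an n x k2 membership matrix for the clustering (hard one-hot rows or soft probabilities), the cross-tabulation of shared membership is simply A-transpose times B.

import numpy as np

# Hypothetical example: 4 observations, 2 hard reference classes (one-hot rows)
# and 3 probabilistic clusters (each row sums to 1).
A = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)
B = np.array([[0.9, 0.1, 0.0],
              [0.6, 0.3, 0.1],
              [0.1, 0.2, 0.7],
              [0.0, 0.1, 0.9]])

# Entry [i, j] is the total membership shared between reference class i
# and cluster j; with hard labels on both sides this is a confusion matrix.
cross_tab = A.T @ B
print(cross_tab)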
Collective matrix factorization (a.k.a. multi-view or multi-way factorization, Singh, Gordon, (2008) <doi:10.1145/1401890.1401969>) tries to approximate a (potentially very sparse or having many missing values) matrix X as the product of two low-dimensional matrices, optionally aided with secondary information matrices about rows and/or columns of X, which are also factorized using the same latent components. The intended usage is for recommender systems, dimensionality reduction, and missing value imputation. Implements extensions of the original model (Cortes, (2018) <arXiv:1809.00366>) and can produce different factorizations such as the weighted implicit-feedback model (Hu, Koren, Volinsky, (2008) <doi:10.1109/ICDM.2008.22>), the weighted-lambda-regularization model (Zhou, Wilkinson, Schreiber, Pan, (2008) <doi:10.1007/978-3-540-68880-8_32>), or the enhanced model with implicit features (Rendle, Zhang, Koren, (2019) <arXiv:1905.01395>), with or without side information. Can use gradient-based procedures or alternating least-squares procedures (Koren, Bell, Volinsky, (2009) <doi:10.1109/MC.2009.263>), with either a Cholesky solver, a faster conjugate gradient solver (Takacs, Pilaszy, Tikk, (2011) <doi:10.1145/2043932.2043987>), or a non-negative coordinate descent solver (Franc, Hlavac, Navara, (2005) <doi:10.1007/11556121_50>), providing efficient methods for sparse and dense data, and mixtures thereof. Supports L1 and L2 regularization in the main models, offers alternative most-popular and content-based models, and implements functionality for cold-start recommendations and imputation of 2D data.
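Schematically, the objective being minimized in this family of models takes roughly the following form (a sketch based on the cited references, with X approximated by A Bᵀ, hypothetical side-information matrices U about rows and V about columns sharing the factors A and B, and missing entries of X excluded from the first term):

\min_{A,B,C,D}\ \lVert X - A B^{\mathsf T} \rVert_F^2 + \lVert U - A C^{\mathsf T} \rVert_F^2 + \lVert V - B D^{\mathsf T} \rVert_F^2 + \lambda \left( \lVert A \rVert_F^2 + \lVert B \rVert_F^2 + \lVert C \rVert_F^2 + \lVert D \rVert_F^2 \right)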
Datasets for the book entitled "Modelling Survival Data in Medical Research" by Collett (2023) <doi:10.1201/9781003282525>. The datasets provide extensive examples of time-to-event data.
Implementations of threshold regression approaches for linear regression models with a covariate subject to random censoring, including deletion threshold regression and completion threshold regression. Reverse survival regression, which flips the roles of the response variable and the covariate, is also considered.
Functions, data and code for Hilbe, J.M. 2011. Negative Binomial Regression, 2nd Edition (Cambridge University Press) and Hilbe, J.M. 2014. Modeling Count Data (Cambridge University Press).
This package provides tools for downloading, reading and analyzing the COVID19 National Household Sample Survey (PNAD COVID19), a household survey from the Brazilian Institute of Geography and Statistics (IBGE). The data must be downloaded from the official website <https://www.ibge.gov.br/>. Further analysis must be carried out using the survey package.
Implementations of recent complex-valued wavelet spectral procedures for analysis of irregularly sampled signals, see Hamilton et al (2018) <doi:10.1080/00401706.2017.1281846>.
Detects a variety of coordinated actions on social media and outputs the network of coordinated users along with related information.
Confirms whether a number is Luhn compliant. Can check whether a credit card number, IMEI number, or any other Luhn-based number is valid. For more info see: <https://en.wikipedia.org/wiki/Luhn_algorithm>.
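For reference, a minimal sketch of the Luhn check itself in Python (the algorithm from the linked article, not this package's interface):

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # From the rightmost digit, double every second digit and subtract 9
    # when the doubled value exceeds 9, then sum all digits.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True: a classic Luhn-valid test number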
Gives convenient access to publicly available police-recorded open crime data from large cities in the United States that are included in the Crime Open Database <https://osf.io/zyaqn/>.
This package provides tools for fitting, assessing, and comparing logistic and conditional logistic regression models. Includes residual diagnostics and goodness-of-fit measures for model development and evaluation in matched case-control studies.
CIFTI files contain brain imaging data in "grayordinates," which represent the gray matter as cortical surface vertices (left and right) and subcortical voxels (cerebellum, basal ganglia, and other deep gray matter). ciftiTools provides a unified environment for reading, writing, visualizing and manipulating CIFTI-format data. It supports the "dscalar," "dlabel," and "dtseries" intents. Grayordinate data is read in as a "xifti" object, which is structured for convenient access to the data and metadata, and includes support for surface geometry files to enable spatially-dependent functionality such as static or interactive visualizations and smoothing.
Cellular cooperation compromises the plating efficiency-based analysis of clonogenic survival data. This tool provides functions that enable a robust analysis of colony formation assay (CFA) data in the presence or absence of cellular cooperation. The implemented method is described in Brix et al. (2020): Brix, N., Samaga, D., Hennel, R. et al. "The clonogenic assay: robustness of plating efficiency-based analysis is strongly compromised by cellular cooperation." Radiat Oncol 15, 248 (2020) <doi:10.1186/s13014-020-01697-y>. Power regression for parameter estimation, calculation of survival fractions, uncertainty analysis and plotting functions are provided.
Perform variable selection for the Cox regression model with interval-censored data. Can deal with both low-dimensional and high-dimensional data. Case-cohort designs can be incorporated. Scenarios with two sets of covariates can also be considered. The references are listed in the URL below.
Markov chain Monte Carlo based inference routines for collapsed latent position cluster models for social networks, including searches over the model space (the number of clusters in the latent position cluster model). The label switching algorithm used is that of Nobile and Fearnside (2007) <doi:10.1007/s11222-006-9014-7>, which relies on the algorithm of Carpaneto and Toth (1980) <doi:10.1145/355873.355883>.
Allows generating colors from palettes defined in the colormap module of Node.js (see <https://github.com/bpostlethwaite/colormap> for more information). In total it provides 44 distinct palettes made from sequential and/or diverging colors. In addition to the predefined palettes you can also specify your own set of colors. There are also scale functions that can be used with ggplot2.
This function applies the Cochran-Armitage trend test to a 2 by k contingency table and reports the test statistic (Z) and p-value. A linear trend in the frequencies is tested, since the weights (0, 1, 2) are used by default.
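As a sketch of the underlying calculation (the standard Cochran-Armitage statistic as usually stated, with scores t_j, column totals c_j, row totals r1 and r2, and grand total n; an illustration, not this package's code):

import math
from itertools import combinations
from statistics import NormalDist

def cochran_armitage(row1, row2, scores=None):
    """Cochran-Armitage trend test for a 2 x k table; returns (Z, two-sided p)."""
    k = len(row1)
    t = scores if scores is not None else list(range(k))  # default weights 0, 1, ..., k-1
    c = [a + b for a, b in zip(row1, row2)]                # column totals
    r1, r2 = sum(row1), sum(row2)                          # row totals
    n = r1 + r2
    T = sum(ti * (a * r2 - b * r1) for ti, a, b in zip(t, row1, row2))
    var = (r1 * r2 / n) * (
        sum(ti ** 2 * ci * (n - ci) for ti, ci in zip(t, c))
        - 2 * sum(t[i] * t[j] * c[i] * c[j] for i, j in combinations(range(k), 2))
    )
    z = T / math.sqrt(var)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical 2 x 3 table; the default scores (0, 1, 2) test a linear trend.
print(cochran_armitage([10, 15, 30], [40, 35, 20]))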
An R interface to Cheetah Grid, a high-performance JavaScript table widget. cheetahR allows users to render millions of rows in just a few milliseconds, making it an excellent alternative to other R table widgets. The package wraps the Cheetah Grid JavaScript functions and makes them readily available for R users. The underlying grid implementation is based on Cheetah Grid <https://github.com/future-architect/cheetah-grid>.
This package provides data science tools for conservation science, including methods for environmental data analysis, humidity calculations, sustainability metrics, engineering calculations, and data visualisation. Supports conservators, scientists, and engineers working with cultural heritage preventive conservation data. The package is motivated by the framework outlined in Cosaert and Beltran et al. (2022) "Tools for the Analysis of Collection Environments" <https://www.getty.edu/conservation/publications_resources/pdf_publications/tools_for_the_analysis_of_collection_environments.html>.
This package provides a shortcut procedure to implement closed testing for large-scale multiple testing, especially with the global test. The shortcut is asymptotically equivalent to closed testing and supports post hoc inference. Users can detect any possible set of features or pathways with the family-wise error rate controlled. The global test is powerful for detecting associations between a group of features and an outcome of interest.
Map functions while capturing results, errors, warnings, messages and other output tidily, then filter and summarise data frames or lists on the basis of those side effects.
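The general idea, shown as a rough Python sketch (a generic illustration, not this package's interface; warnings and messages could be captured the same way):

def safe_map(fn, items):
    """Apply fn to each item, capturing results and errors instead of stopping."""
    records = []
    for x in items:
        rec = {"input": x, "result": None, "error": None}
        try:
            rec["result"] = fn(x)
        except Exception as err:  # capture the side effect rather than raising
            rec["error"] = err
        records.append(rec)
    return records

records = safe_map(lambda x: 1 / x, [1, 2, 0, 4])
failures = [r for r in records if r["error"] is not None]
print(len(failures), "call(s) failed:", failures)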
Access chemical, hazard, bioactivity, and exposure data from the Computational Toxicology and Exposure ('CTX') APIs <https://api-ccte.epa.gov/docs/>. ccdR was developed to streamline the process of accessing the information available through the CTX APIs without requiring prior knowledge of how to use APIs. Most data is also available on the CompTox Chemical Dashboard ('CCD') <https://comptox.epa.gov/dashboard/> and other resources found at the EPA Computational Toxicology and Exposure Online Resources <https://www.epa.gov/comptox-tools>.
This package provides a header-only C++ interface to R with enhancements over cpp11. Enforces copy-on-write semantics consistent with R behavior. Offers native support for ALTREP objects, UTF-8 string handling, modern C++11 features and idioms, and reduced memory requirements. Allows for vendoring, making it useful for restricted environments. Compared to cpp11, it adds support for converting C++ maps to R lists, Roxygen documentation directly in C++ code, proper handling of matrix attributes, support for nullable external pointers, bidirectional copy of complex number types, flexibility in type conversions, use of nullable pointers, and various performance optimizations.