Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in response headers.
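For example, a minimal R sketch of calling this endpoint might look like the following; the base URL is a placeholder (replace it with this site's host), and the exact pagination header names are not documented above, so inspect the returned headers yourself.

# Minimal sketch of querying the package search API from R.
# NOTE: "https://example.org" is a placeholder base URL (an assumption),
# and pagination header names should be checked via headers(resp).
library(httr)
library(jsonlite)

base_url <- "https://example.org"   # placeholder: replace with this site's host
resp <- GET(base_url, path = "api/packages",
            query = list(search = "hello", page = 1, limit = 20))

packages <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
headers(resp)   # pagination info (e.g. number of pages) is returned here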
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a set of fast tools for converting a textual corpus into a set of normalized tables. Users may make use of the udpipe back end with no external dependencies, or a Python back end with spaCy <https://spacy.io>. Exposed annotation tasks include tokenization, part of speech tagging, named entity recognition, and dependency parsing.
This package provides a big-data approach to fitting cumulative probability models using the orm() function from the rms package. See Liu et al. (2017) <DOI:10.1002/sim.7433> for details.
Gain access to the Spark Catalog API making use of the sparklyr API. Catalog <https://spark.apache.org/docs/2.4.3/api/java/org/apache/spark/sql/catalog/Catalog.html> is the interface for managing a metastore (aka metadata catalog) of relational entities (e.g. database(s), tables, functions, table columns and temporary views).
Cluster Evolution Analytics supports exploratory what-if questions: the present information about an object is plugged into a dataset from an earlier time frame so that its evolution (and that of its neighbors) up to the present can be explored. See the URL for the papers associated with this package, for instance Morales-Oñate and Morales-Oñate (2024) <doi:10.1016/j.softx.2024.101921>.
Retrieves historical versions of clinical trial registry entries from <https://ClinicalTrials.gov>. Package functionality and implementation for version 1.0.0 is documented in Carlisle (2022) <DOI:10.1371/journal.pone.0270909>.
Accelerate Bayesian analytics workflows in R through interactive modelling, visualization, and inference. Define probabilistic graphical models using directed acyclic graphs (DAGs) as a unifying language for business stakeholders, statisticians, and programmers. This package relies on interfacing with the numpyro Python package.
Design and evaluate choice-based conjoint survey experiments. Generate a variety of survey designs, including random designs, frequency-based designs, and D-optimal designs, as well as "labeled" designs (also known as "alternative-specific designs"), designs with "no choice" options, and designs with dominant alternatives removed. Conveniently inspect and compare designs using a variety of metrics, including design balance, overlap, and D-error, and simulate choice data for a survey design either randomly or according to a utility model defined by user-provided prior parameters. Conduct a power analysis for a given survey design by estimating the same model on different subsets of the data to simulate different sample sizes. Bayesian D-efficient designs using the cea and modfed methods are obtained using the idefix package by Traets et al (2020) <doi:10.18637/jss.v096.i03>. Choice simulation and model estimation in power analyses are handled using the logitr package by Helveston (2023) <doi:10.18637/jss.v105.i10>.
Access Cloudstor via its WebDAV API. This package can read, write, and navigate Cloudstor from R.
This package provides daily counts of Coronavirus (COVID-19) cases by district and country. Data source: Epidemiological Unit, Ministry of Health, Sri Lanka <https://www.epid.gov.lk/web/>.
An implementation of Conic Multivariate Adaptive Regression Splines (CMARS) in R. See Weber et al. (2011) CMARS: a new contribution to nonparametric regression with multivariate adaptive regression splines supported by continuous optimization, <DOI:10.1080/17415977.2011.624770>. It constructs models by using the terms obtained from the forward step of MARS and then estimates parameters by using Tikhonov regularization and conic quadratic optimization. It is possible to construct models for prediction and binary classification. It provides performance measures for the model developed. The package needs the optimisation software MOSEK <https://www.mosek.com/> to construct the models. Please follow the instructions in Rmosek for the installation.
This package implements a changepoint-aware ensemble forecasting algorithm that combines Theta, TBATS (Trigonometric, Box-Cox transformation, ARMA errors, Trend, Seasonal components), and ARFIMA (AutoRegressive, Fractionally Integrated, Moving Average) using a product-of-experts approach for robust probabilistic prediction.
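As a rough illustration of the product-of-experts step (not this package's actual interface), Gaussian forecast densities can be combined by multiplying them, which again yields a Gaussian whose precision is the sum of the experts' precisions:

# Product-of-experts combination of Gaussian forecasts (generic sketch,
# not this package's API). Each expert i contributes N(mu[i], sd[i]^2);
# the product of the densities is again Gaussian.
poe_combine <- function(mu, sd) {
  prec <- 1 / sd^2                      # precisions of the experts
  prec_poe <- sum(prec)                 # combined precision
  mu_poe <- sum(prec * mu) / prec_poe   # precision-weighted mean
  c(mean = mu_poe, sd = sqrt(1 / prec_poe))
}

# e.g. hypothetical one-step-ahead forecasts from Theta, TBATS and ARFIMA:
poe_combine(mu = c(102.1, 99.8, 101.0), sd = c(2.5, 3.0, 2.2))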
Collective matrix factorization (a.k.a. multi-view or multi-way factorization, Singh, Gordon, (2008) <doi:10.1145/1401890.1401969>) tries to approximate a (potentially very sparse or having many missing values) matrix X as the product of two low-dimensional matrices, optionally aided with secondary information matrices about rows and/or columns of X, which are also factorized using the same latent components. The intended usage is for recommender systems, dimensionality reduction, and missing value imputation. Implements extensions of the original model (Cortes, (2018) <arXiv:1809.00366>) and can produce different factorizations such as the weighted implicit-feedback model (Hu, Koren, Volinsky, (2008) <doi:10.1109/ICDM.2008.22>), the weighted-lambda-regularization model (Zhou, Wilkinson, Schreiber, Pan, (2008) <doi:10.1007/978-3-540-68880-8_32>), or the enhanced model with implicit features (Rendle, Zhang, Koren, (2019) <arXiv:1905.01395>), with or without side information. Can use gradient-based procedures or alternating least-squares procedures (Koren, Bell, Volinsky, (2009) <doi:10.1109/MC.2009.263>), with either a Cholesky solver, a faster conjugate gradient solver (Takacs, Pilaszy, Tikk, (2011) <doi:10.1145/2043932.2043987>), or a non-negative coordinate descent solver (Franc, Hlavac, Navara, (2005) <doi:10.1007/11556121_50>), providing efficient methods for sparse and dense data, and mixtures thereof. Supports L1 and L2 regularization in the main models, offers alternative most-popular and content-based models, and implements functionality for cold-start recommendations and imputation of 2D data.
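A bare-bones sketch of the core low-rank idea (dense data, L2 regularization only, no side information), shown for illustration and not reflecting this package's implementation:

# Alternating least squares for X ~ A %*% t(B); generic illustration only.
als_factorize <- function(X, k = 5, lambda = 0.1, iters = 20) {
  m <- nrow(X); n <- ncol(X)
  A <- matrix(rnorm(m * k, sd = 0.1), m, k)
  B <- matrix(rnorm(n * k, sd = 0.1), n, k)
  I_k <- diag(lambda, k)
  for (it in seq_len(iters)) {
    A <- t(solve(crossprod(B) + I_k, t(X %*% B)))       # update row factors
    B <- t(solve(crossprod(A) + I_k, t(crossprod(X, A)))) # update column factors
  }
  list(A = A, B = B)
}

fit <- als_factorize(matrix(rnorm(200), 20, 10), k = 3)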
Computes conditional multivariate normal densities, probabilities, and random deviates.
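The underlying computation is the standard conditional distribution of a partitioned multivariate normal; the base-R sketch below evaluates that formula directly and is not this package's own interface.

# Conditional distribution of a partitioned multivariate normal:
# given X ~ N(mu, Sigma) with index sets dep (dependent) and given,
# X_dep | X_given = x  ~  N(mu1 + S12 S22^{-1} (x - mu2), S11 - S12 S22^{-1} S21).
cond_mvn <- function(mu, Sigma, dep, given, x_given) {
  S12 <- Sigma[dep, given, drop = FALSE]
  S22 <- Sigma[given, given, drop = FALSE]
  W <- S12 %*% solve(S22)
  list(mean = mu[dep] + drop(W %*% (x_given - mu[given])),
       var  = Sigma[dep, dep, drop = FALSE] - W %*% Sigma[given, dep, drop = FALSE])
}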
This package creates ggplot2 Cumulative Residual (CURE) plots to check the goodness of fit of a count model, or the tables needed to create a customized version. A dataset of crashes in Washington state is available for illustrative purposes.
According to the codes and names of county-level and above administrative divisions released in 2022 by the Ministry of Civil Affairs of the People's Republic of China, the online vector map files were retrieved from the website (available at: <http://datav.aliyun.com/portal/school/atlas/area_selector>). This study was supported by the National Natural Science Foundation of China (NSFC, Grant No. 42205177).
Routines for cone projection and quadratic programming, as well as for estimation and inference in constrained parametric regression and shape-restricted regression problems. See Mary C. Meyer (2013) <doi:10.1080/03610918.2012.659820> for more details.
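As a generic illustration of cone projection posed as a quadratic program (using the quadprog package rather than this package's own routines), the projection of a vector onto the monotone cone can be computed as follows:

# Project y onto the monotone cone { theta : theta_1 <= ... <= theta_n },
# i.e. minimize ||y - theta||^2 subject to D theta >= 0 where D takes
# successive differences. Generic sketch, not this package's API.
library(quadprog)

y <- c(1.2, 0.8, 1.5, 1.4, 2.1)
n <- length(y)
D <- diff(diag(n))                          # (n-1) x n differencing matrix
fit <- solve.QP(Dmat = diag(n), dvec = y,
                Amat = t(D), bvec = rep(0, n - 1))
fit$solution                                # the isotonic (monotone) fit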
Computes 138 standard climate indices at monthly, seasonal and annual resolution. These indices were selected, based on their direct and significant impacts on target sectors, after a thorough review of the literature in the field of extreme weather events and natural hazards. Overall, the selected indices characterize different aspects of the frequency, intensity and duration of extreme events, and are derived from a broad set of climatic variables, including surface air temperature, precipitation, relative humidity, wind speed, cloudiness, solar radiation, and snow cover. The 138 indices have been classified as follows: Temperature based indices (42), Precipitation based indices (22), Bioclimatic indices (21), Wind-based indices (5), Aridity/continentality indices (10), Snow-based indices (13), Cloud/radiation based indices (6), Drought indices (8), Fire indices (5), Tourism indices (5).
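For a flavor of the simplest temperature-based indices, a generic base-R sketch is shown below; the exact definitions used by the package may differ.

# Two simple temperature-based indices computed from daily data:
# FD = frost days (Tmin < 0 degC), SU = summer days (Tmax > 25 degC).
frost_days  <- function(tmin) sum(tmin < 0,  na.rm = TRUE)
summer_days <- function(tmax) sum(tmax > 25, na.rm = TRUE)

set.seed(1)
tmin <- rnorm(365, mean = 5, sd = 8)   # hypothetical daily minima (degC)
frost_days(tmin)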
Classification of climate according to Koeppen-Geiger, of aridity indices, of continentality indices, of water balance after Thornthwaite, and of viticultural bioclimatic indices. Drawing of climographs: Thornthwaite, Peguy, Bagnouls-Gaussen.
Estimation of gas transport properties (viscosity, diffusion, thermal conductivity) using Chapman-Enskog theory (Chapman and Larmor 1918, <doi:10.1098/rsta.1918.0005>) and of the second virial coefficient (Vargas et al. 2001, <doi:10.1016/s0378-4371(00)00362-9>) using the Lennard-Jones (12-6) potential. Corrections up to third order are taken into account for viscosity and thermal conductivity. It is also possible to calculate the binary diffusion coefficients of polar and non-polar gases in non-polar bath gases (Brown et al. 2011, <doi:10.1016/j.pecs.2010.12.001>). 16 collision integrals are calculated with four-digit accuracy over the reduced temperature range [0.3, 400] using an interpolation function of Kim and Monroe (2014, <doi:10.1016/j.jcp.2014.05.018>).
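For orientation, the textbook first-order Chapman-Enskog viscosity formula can be written in a few lines of R; the reduced collision integral and the example Lennard-Jones parameters below are illustrative assumptions, and the package itself goes beyond first order.

# First-order Chapman-Enskog viscosity of a dilute gas (textbook formula):
#   eta1 = (5/16) * sqrt(pi * m * kB * T) / (pi * sigma^2 * Omega22),
# where Omega22 is the reduced collision integral Omega^(2,2)*(T*).
viscosity_ce1 <- function(T, m, sigma, Omega22) {
  kB <- 1.380649e-23                       # Boltzmann constant, J/K
  (5 / 16) * sqrt(pi * m * kB * T) / (pi * sigma^2 * Omega22)
}

# e.g. argon at 300 K with sigma = 3.405e-10 m and Omega22 = 1.0
# (illustrative values only); result in Pa*s:
viscosity_ce1(T = 300, m = 39.948 * 1.66054e-27, sigma = 3.405e-10, Omega22 = 1.0)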
Emulation of the Computer Aided Rate Modeling and Simulation application originally created by Paul Pukite. See Jan Pukite and Paul Pukite (1998, ISBN 978-0-7803-3482) and William J. Stewart (1994, ISBN 0-691-03699-3).
This package provides a set of functions that help you generate descriptive statistics based on the variable types.
Manipulate and view coronavirus data and other societally relevant data at a basic level.
The ConNEcT approach investigates the pairwise association strength of binary time series by calculating contingency measures and depicts the results in a network. The package includes features to explore and visualize the data. To calculate the pairwise concurrent or temporally sequenced relationship between the variables, the package provides seven contingency measures (proportion of agreement, classical and corrected Jaccard, Cohen's kappa, phi correlation coefficient, odds ratio, and log odds ratio); others can easily be implemented. The package also includes non-parametric significance tests that can be applied to test whether the contingency value quantifying the relationship between the variables is significantly higher than chance level. Most importantly, this test accounts for auto-dependence and relative frequency. See Bodner et al. (2021) <doi:10.1111/bmsp.12222>. Finally, a network can be drawn. Variables are depicted as the nodes of the network, with node size adapted to prevalence. The association strength between the variables defines the undirected (concurrent) or directed (temporally sequenced) links between the nodes. The results of the non-parametric significance test can be included by depicting either all links or only the significant ones. For a tutorial, see Bodner et al. (2021) <doi:10.3758/s13428-021-01760-w>.
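As a generic sketch of two of the contingency measures named above (classical Jaccard and the phi coefficient), computed from the 2x2 table of two binary series and not using this package's own functions:

# Classical Jaccard and phi coefficient for two binary series x and y.
contingency_measures <- function(x, y) {
  n11 <- sum(x == 1 & y == 1); n10 <- sum(x == 1 & y == 0)
  n01 <- sum(x == 0 & y == 1); n00 <- sum(x == 0 & y == 0)
  jaccard <- n11 / (n11 + n10 + n01)
  phi <- (n11 * n00 - n10 * n01) /
    sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
  c(jaccard = jaccard, phi = phi)
}

set.seed(7)
x <- rbinom(100, 1, 0.3); y <- rbinom(100, 1, 0.3)
contingency_measures(x, y)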
This package provides estimation procedures for copula-based stochastic frontier models for cross-sectional data. The package implements maximum likelihood estimation of stochastic frontier models allowing flexible dependence structures between inefficiency and noise terms through various copula families (e.g., Gaussian and Student-t). It enables estimation of technical efficiency scores, log-likelihood values, and information criteria (AIC and BIC). The implemented framework builds upon stochastic frontier analysis introduced by Aigner, Lovell and Schmidt (1977) <doi:10.1016/0304-4076(77)90052-5> and the copula theory described in Joe (2014, ISBN:9781466583221). Empirical applications of copula-based stochastic frontier models can be found in Wiboonpongse et al. (2015) <doi:10.1016/j.ijar.2015.06.001> and Maneejuk et al. (2017, ISBN:9783319562176).