Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned in response headers.
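For example, a minimal sketch of calling this endpoint from a script (the base URL is a placeholder for this site's host, and a JSON response body is assumed; only the query parameters above come from the documentation), in Python:

    import requests

    BASE_URL = "https://example.org"  # placeholder: substitute the host serving this site

    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "gcc@10", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    print(resp.headers)   # pagination information (e.g. total number of pages) is in the response headers
    print(resp.json())    # matching packages, assuming the body is JSON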
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Several oceanographic data sets are provided for use by the oce package and for other purposes.
This package provides a regression framework for response variables which are continuous self-rating scales, such as the Visual Analog Scale (VAS) used in pain assessment or the Linear Analog Self-Assessment (LASA) scales in quality of life studies. These scales measure subjects' perception of an intangible quantity and cannot be handled as ratio variables because of their inherent non-linearity. We treat them as ordinal variables, measured on a continuous scale. A function (the g function) connects the scale with an underlying continuous latent variable. The link function is the inverse of the CDF of the assumed underlying distribution of the latent variable. A variety of link functions are currently implemented. Such models are described in Manuguerra et al. (2020) <doi:10.18637/jss.v096.i08>.
This package provides functions to do O2PLS-DA analysis for multiple omics data integration. The algorithm comes from "O2-PLS, a two-block (X-Y) latent variable regression (LVR) method with an integral OSC filter", published by Johan Trygg and Svante Wold in 2003 <doi:10.1002/cem.775>. O2PLS is a bidirectional multivariate regression method that aims to separate the covariance between two data sets (it was recently extended to multiple data sets) (Löfstedt and Trygg, 2011 <doi:10.1002/cem.1388>; Löfstedt et al., 2012 <doi:10.1016/j.aca.2013.06.026>) from the systematic sources of variance specific to each data set.
This package implements orbit counting using a fast combinatorial approach. It counts orbits of nodes and edges from an edge matrix or data frame, or from a graph object from the graph package.
Solves linear systems of the form Ax = b via Gaussian elimination, LU decomposition, the Gauss-Seidel method, the Conjugate Gradient Method (CGM), and the Cholesky method.
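As a generic illustration of one of the named methods (a sketch of the textbook Gauss-Seidel iteration, not this package's interface), in Python:

    import numpy as np

    def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
        """Iteratively solve Ax = b; converges e.g. for diagonally dominant A."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                # use already-updated entries x[:i] and previous entries x_old[i+1:]
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant example
    b = np.array([1.0, 2.0])
    print(gauss_seidel(A, b))                # approximates np.linalg.solve(A, b)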
This package implements the orthogonal reparameterization approach recommended by Lancaster (2002) to estimate dynamic panel models with fixed effects (and, optionally, panel-specific intercepts). The approach uses a likelihood-based estimator and produces estimates that are asymptotically unbiased as N goes to infinity, with T as low as 2.
A tetra-allele cross, often referred to as a four-way cross, double cross, or four-line cross, is a type of mating design in which every cross is obtained by mating among four inbred lines. A tetra-allele cross can be obtained by crossing the resultant of two unrelated diallel crosses. A common tetra-allele cross involving four inbred lines A, B, C and D can be symbolically represented as (A X B) X (C X D), (A, B, C, D), (A B C D), etc. Tetra-allele crosses can be broadly categorized as the Complete Tetra-allele Cross (CTaC) and the Partial Tetra-allele Cross (PTaC). Rawlings and Cockerham (1962) <doi:10.2307/2527461> first introduced and gave the method of analysis for tetra-allele cross hybrids, using the analysis method of single cross hybrids under the assumption of no linkage. The set of all possible four-way matings between several genotypes (individuals, clones, homozygous lines, etc.) leads to a CTaC. If N inbred lines are involved in a CTaC, the total number of crosses is T = N*(N-1)*(N-2)*(N-3)/8. As the number of lines grows, the total number of crosses in the CTaC increases rapidly, so it quickly becomes impractical for an investigator to carry out the experiment with limited resource material. This motivates taking a fraction of the CTaC with certain underlying properties, known as a PTaC.
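As a quick worked example of the formula above (illustrative values only), in Python:

    def ctac_crosses(n):
        """Total number of crosses in a complete tetra-allele cross with n inbred lines."""
        return n * (n - 1) * (n - 2) * (n - 3) // 8

    print(ctac_crosses(10))   # 10*9*8*7/8 = 630
    print(ctac_crosses(20))   # 14535 -- the design grows quickly with n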
Conversion between the most common odds types for sports betting. Hong Kong odds, US odds, decimal odds, Indonesian odds, Malaysian odds, and raw probabilities are covered in this package.
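For illustration, the standard arithmetic behind a few of these conversions (a generic sketch, not this package's functions; margin-free odds assumed), in Python:

    def decimal_to_probability(decimal_odds):
        """Implied probability of decimal odds (ignoring bookmaker margin)."""
        return 1.0 / decimal_odds

    def decimal_to_us(decimal_odds):
        """US (moneyline) odds: positive for underdogs, negative for favourites."""
        if decimal_odds >= 2.0:
            return (decimal_odds - 1.0) * 100.0
        return -100.0 / (decimal_odds - 1.0)

    def decimal_to_hongkong(decimal_odds):
        """Hong Kong odds are the net return per unit staked."""
        return decimal_odds - 1.0

    print(decimal_to_probability(2.50))  # 0.4
    print(decimal_to_us(2.50))           # 150.0
    print(decimal_to_us(1.50))           # -200.0
    print(decimal_to_hongkong(2.50))     # 1.5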
An unofficial wrapper for the OKX exchange v5 API <https://www.okx.com/docs-v5/en/>, including the REST API and WebSocket API.
Download and import of OpenStreetMap ('OSM') data as sf or sp objects. OSM data are extracted from the Overpass web server (<https://overpass-api.de/>) and processed with very fast C++ routines for return to R.
Perform interactive occupation coding during interviews as described in Peycheva, D., Sakshaug, J., Calderwood, L. (2021) <doi:10.2478/jos-2021-0042> and Schierholz, M., Gensicke, M., Tschersich, N., Kreuter, F. (2018) <doi:10.1111/rssa.12297>. Generate suggestions for occupational categories based on free text input, with pre-trained machine learning models in German and a ready-to-use shiny application provided for quick and easy data collection.
Data integration web application for biobanks by OBiBa. Opal is the core database application for biobanks. Participant data, once collected from any data source, must be integrated and stored in a central data repository under a uniform model. Opal is such a central repository. It can import, process, validate, query, analyze, report, and export data. Opal is typically used in a research center to analyze the data acquired at assessment centres. Its ultimate purpose is to achieve seamless data-sharing among biobanks. This Opal client allows one to interact with Opal web services and to perform operations on the R server side. DataSHIELD administration tools are also provided.
An interface between R and the OSRM API. OSRM is a routing service based on OpenStreetMap data. See <http://project-osrm.org/> for more information. This package enables the computation of routes, trips, isochrones and travel distance matrices (travel time and kilometric distance).
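For context, the OSRM HTTP route service behind such an interface can also be queried directly; a minimal sketch against the public demo server (endpoint layout and response fields as assumed from the OSRM documentation; coordinates are longitude,latitude), in Python:

    import requests

    # Route between two points (lon,lat;lon,lat) using the public demo server.
    url = "https://router.project-osrm.org/route/v1/driving/2.3522,48.8566;4.8357,45.7640"
    resp = requests.get(url, params={"overview": "false"})
    resp.raise_for_status()

    route = resp.json()["routes"][0]
    print(route["duration"] / 60, "minutes")       # travel time
    print(route["distance"] / 1000, "kilometres")  # travel distance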
This package provides functions to estimate the optimal threshold of diagnostic markers or treatment selection markers. The optimal threshold is the marker value that maximizes the utility of the marker-based strategy (for diagnosis or treatment selection) in a given population. The utility function depends on the type of marker (diagnostic or treatment selection), but always takes into account the preferences of the patients or the physician in the decision process. To estimate the optimal threshold, one must specify the distributions of the marker in the different groups (defined according to the type of marker, diagnostic or treatment selection) and provide data to estimate the parameters of these distributions. One must also provide some features of the target populations (disease prevalence or treatment efficacies) as well as the preferences of patients or physicians. The functions rely on Bayesian inference, which helps produce several indicators derived from the optimal threshold. See Blangero, Y., Rabilloud, M., Ecochard, R., and Subtil, F. (2019) <doi:10.1177/0962280218821394> for the original article that describes the estimation method for treatment selection markers and Subtil, F., and Rabilloud, M. (2019) <doi:10.1002/bimj.200900242> for diagnostic markers.
Likelihood-based optimal partitioning and indicator species analysis. Finding the best binary partition for each species based on model selection, with the possibility of taking into account modifying/confounding variables, as described in Kemencei et al. (2014) <doi:10.1556/ComEc.15.2014.2.6>. The package implements binary and multi-level response models, various measures of uncertainty, and Lorenz-curve based thresholding, with native support for parallel computations.
The Open University Learning Analytics Dataset (OULAD) is available from Kuzilek et al. (2017) <doi:10.1038/sdata.2017.171>. The ouladFormat package loads, cleans and formats the OULAD for data analysis (each row of the returned data set is an individual student). The package's main function, combined_dataset(), allows the user to choose whether the returned data set includes assessment, demographics, virtual learning environment (VLE), or registration variables etc.
Package for estimating the parameters of a nonlinear function using iterated linearization via Taylor series. The method is based on Kubáček (2000), ISBN 80-244-0093-6. The algorithm is a generalization of the procedure given in Köning, R., Wimmer, G. and Witkovský, V. (2014) <doi:10.1088/0957-0233/25/11/115001>.
Objects and methods to handle and solve the min-sum location problem, also known as the Fermat-Weber problem. The min-sum location problem searches for a point such that the weighted sum of the distances to the demand points is minimized. See "The Fermat-Weber location problem revisited" by Brimberg, Mathematical Programming, 1, pg. 71-76, 1995, <DOI:10.1007/BF01592245>. General global optimization algorithms are used to solve the problem, along with the ad hoc Weiszfeld method; see "Sur le point pour lequel la Somme des distances de n points donnes est minimum", by Weiszfeld, Tohoku Mathematical Journal, First Series, 43, pg. 355-386, 1937, or "On the point for which the sum of the distances to n given points is minimum", by E. Weiszfeld and F. Plastria, Annals of Operations Research, 167, pg. 7-41, 2009, <DOI:10.1007/s10479-008-0352-z>.
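To make the Weiszfeld iteration referenced above concrete, here is a sketch of the classical algorithm (a generic illustration, not this package's interface), in Python:

    import numpy as np

    def weiszfeld(points, weights, max_iter=1000, tol=1e-9):
        """Approximate the weighted min-sum (Fermat-Weber) point of the demand points."""
        points = np.asarray(points, dtype=float)
        weights = np.asarray(weights, dtype=float)
        x = np.average(points, axis=0, weights=weights)   # start at the weighted centroid
        for _ in range(max_iter):
            d = np.linalg.norm(points - x, axis=1)
            if np.any(d < 1e-12):                         # iterate landed on a demand point
                break
            w = weights / d
            x_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(x_new - x) < tol:
                x = x_new
                break
            x = x_new
        return x

    demand = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
    print(weiszfeld(demand, weights=[1.0, 1.0, 1.0]))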
Visualise results obtained from analysing data mapped to the Observational Medical Outcomes Partnership (OMOP) common data model using shiny applications.
Programs for detecting and cleaning outliers in single time series and in time series from homogeneous and heterogeneous databases using an Orthogonal Greedy Algorithm (OGA) for saturated linear regression models. The programs implement the procedures presented in the paper entitled "Efficient Outlier Detection for Large Time Series Databases" by Pedro Galeano, Daniel Peña and Ruey S. Tsay (2026), working paper, Universidad Carlos III de Madrid. Version 1.1.2 fixes one bug.
This package provides a toolbox for working with public opinion data from Argentina. It facilitates access to microdata and the calculation of indicators of the Trust in Government Index (ICG), prepared by the Torcuato Di Tella University. Although we will try to document everything possible in English, by its very nature Spanish will be the main language of the package.
Automated reporting in Word and PowerPoint can require customization for each organizational template. This package works around this by adding standard reporting functions and an abstraction layer to facilitate automated reporting workflows that can be replicated across different organizational templates.
The identity provider 'OneLogin' (<http://onelogin.com>) is used for authentication via Single Sign-On (SSO). This package provides an R interface to their API.
Users can build a single shiny app for exploring population characterization, population-level causal effect estimation, and patient-level prediction results generated via the R analyses packages in HADES (see <https://ohdsi.github.io/Hades/>). Learn more about OhdsiShinyAppBuilder at <https://ohdsi.github.io/OhdsiShinyAppBuilder/>.