Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
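For example, here is a minimal Python sketch of a paginated query. The base URL is a placeholder for wherever this service is hosted, and the exact pagination header names are not documented above, so the sketch simply prints all response headers.

    import requests

    # Placeholder host; substitute the address where this service is running.
    BASE_URL = "https://example.org"

    # Equivalent to GET /api/packages?search=hello&page=1&limit=20
    response = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )

    packages = response.json()    # matching packages for this page
    print(response.headers)       # pagination details are returned as headers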
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
An interface to the Fish Tree of Life API to download taxonomies, phylogenies, fossil calibrations, and diversification rate information for ray-finned fishes.
This package provides tools for training and analysing fairness-aware gated neural networks for subgroup-aware prediction and interpretation in clinical datasets. Methods draw on prior work in mixture-of-experts neural networks by Jordan and Jacobs (1994) <doi:10.1007/978-1-4471-2097-1_113>, fairness-aware learning by Hardt, Price, and Srebro (2016) <doi:10.48550/arXiv.1610.02413>, and personalised treatment prediction for depression by Iniesta, Stahl, and McGuffin (2016) <doi:10.1016/j.jpsychires.2016.03.016>.
Create a forest plot based on the layout of the data. Confidence intervals in multiple columns, grouped by category, can be drawn easily. The plot can be edited further, for example by inserting text or applying a theme, among other options.
The FastRCS algorithm of Vakili and Schmitt (2014) for robust fitting of the multivariable linear regression model and outlier detection.
The fusion learning method uses a model selection algorithm to learn from multiple data sets across different experimental platforms through group penalization. The responses of interest may include a mix of discrete and continuous variables. The responses may share the same set of predictors; however, the models and parameters differ across platforms. Integrating information from different data sets can enhance the power of model selection. The package is based on Xin Gao and Raymond J. Carroll (2017) <arXiv:1610.00667v1>.
Curry, Compose, and other higher-order functions.
Read and write PNG images with arrays, rasters, native rasters, numeric arrays, integer arrays, raw vectors and indexed values. This PNG encoder exposes configurable internal options enabling the user to select a speed-size tradeoff. For example, disabling compression can speed up writing PNGs by a factor of 50. Multiple image formats are supported, including raster, native rasters, and integer and numeric arrays at color depths of 1, 2, 3 or 4. 16-bit images are also supported. This implementation uses the libspng C library, which is available from <https://github.com/randy408/libspng/>.
This package contains the core functions associated with Fast Regularized Canonical Correlation Analysis. Please see the following for details: Raul Cruz-Cano, Mei-Ling Ting Lee, Fast regularized canonical correlation analysis, Computational Statistics & Data Analysis, Volume 70, 2014, Pages 88-100, ISSN 0167-9473 <doi:10.1016/j.csda.2013.09.020>.
Create local, regional, and global explanations for any machine learning model with forward marginal effects. You provide a model and data, and fmeffects computes feature effects. The package is based on the theory in: C. A. Scholbeck, G. Casalicchio, C. Molnar, B. Bischl, and C. Heumann (2022) <doi:10.48550/arXiv.2201.08837>.
Function factories are functions that make functions. They can be confusing to construct. Straightforward techniques can produce functions that are fragile or hard to understand. While more robust techniques exist to construct function factories, those techniques can be confusing. This package is designed to make it easier to construct function factories.
Routines for model-based functional cluster analysis for functional data with optional covariates. The idea is to cluster functional subjects (often called functional objects) into homogeneous groups by using spline smoothers (for functional data) together with scalar covariates. The spline coefficients and the covariates are modelled as a multivariate Gaussian mixture model, where the number of mixtures corresponds to the number of clusters. The parameters of the model are estimated by maximizing the observed mixture likelihood via an EM algorithm (Arnqvist and Sjöstedt de Luna, 2019) <doi:10.48550/arXiv.1904.10265>. The clustering method is used to analyze annual lake sediments from Lake Kassjön (northern Sweden), which cover more than 6400 years and can be seen as historical records of weather and climate.
Regression models for functional data, i.e., scalar-on-function, function-on-scalar and function-on-function regression models, are fitted by a component-wise gradient boosting algorithm. For a manual on how to use FDboost, see Brockhaus, Ruegamer, and Greven (2017) <doi:10.18637/jss.v094.i10>.
This package provides a joint model for large-scale, competing risks time-to-event data with single or multiple longitudinal biomarkers, implemented with the efficient algorithms developed by Li and colleagues (2022) <doi:10.1155/2022/1362913> and <doi:10.48550/arXiv.2506.12741>. The time-to-event data is modelled using a (cause-specific) Cox proportional hazards regression model with time-fixed covariates. The longitudinal biomarkers are modelled using a linear mixed effects model. The association between the longitudinal submodel and the survival submodel is captured through shared random effects. It allows researchers to analyze large-scale data to model biomarker trajectories, estimate their effects on event outcomes, and dynamically predict future events from patients' past histories. A function for simulating survival and longitudinal data for multiple biomarkers is also included alongside built-in datasets.
This package provides methods for performing fMRI quality assurance (QA) measurements of test objects. Heavily based on the fBIRN procedures detailed by Friedman and Glover (2006) <doi:10.1002/jmri.20583>.
This package provides a wrapper for the API of the Danish Parliament. It makes it possible to get data from the API easily into a data frame. Learn more at <http://www.ft.dk/dokumenter/aabne_data>.
Generate SPSS/SAS-style frequency tables. Frequency tables are generated with variable and value label attributes where applicable, with optional HTML output to quickly examine datasets.
Connects to the Fitbit Web API <https://dev.fitbit.com/build/reference/web-api/> and includes ggplot2 visualizations, Leaflet maps and 3-dimensional Rayshader maps. The 3-dimensional Rayshader maps require the installation of the CopernicusDEM R package, which includes the 30- and 90-meter elevation data.
Useful functions to translate text into multiple languages using online translators. For example, by translating error messages and descriptive analysis results into a language familiar to the user, it enables a better understanding of the information, thereby reducing language barriers. It offers several helper functions to query gene information to help interpret genes of interest (e.g., marker genes, differentially expressed genes), and provides utilities to translate ggplot graphics. This package is not affiliated with any of the online translators. The developers do not take responsibility for any charges incurred when using this package, especially for exceeding the free quota.
Fatty acid metabolic analysis aimed at the estimation of FA import (I), de novo synthesis (S), fractional contribution of the 13C-tracers (D0, D1, D2), elongation (E) and desaturation (Des) based on mass isotopologue data.
This package provides a collection of functions to manage, investigate and analyze data sets of financial assets from different points of view.
This package provides a comprehensive Shiny-based graphical user interface for conducting a wide range of factor analysis procedures. FAfA (Factor Analysis for All) guides users through data uploading, assumption checking (descriptives, collinearity, multivariate normality, outliers), data wrangling (variable exclusion, data splitting), factor retention analysis (e.g., Parallel Analysis, Hull method, EGA), Exploratory Factor Analysis (EFA) with various rotation and extraction methods, Confirmatory Factor Analysis (CFA) for model testing, Reliability Analysis (e.g., Cronbach's Alpha, McDonald's Omega), Measurement Invariance testing across groups, and item weighting techniques. The application leverages established R packages such as lavaan and psych to perform these analyses, offering an accessible platform for researchers and students. Results are presented in user-friendly tables and plots, with options for downloading outputs.
"This package quantifies the provenance of sediments in a catchment or study area. Based on a characterization of the sediment sources and the end sediment mixtures, a mixing model algorithm is applied to the sediment mixtures to estimate the relative contribution of each potential source. The package includes several graphs to help users in their data understanding, such as box plots, correlation, PCA, and LDA graphs. In addition, new developments such as the Consensus Ranking (CR), Consistent Tracer Selection (CTS), and Linear Variability Propagation (LVP) methods are included to correctly apply the fingerprinting technique and increase dataset and model understanding. A new method based on Conservative Balance (CB) method has also been included to enable the use of isotopic tracers.".
The following classes of frailty models can be fitted with this R package, using either a penalized likelihood estimation on the hazard function or a parametric estimation:
1) A shared frailty model (with gamma or log-normal frailty distribution) and Cox proportional hazard model. Clustered and recurrent survival times can be studied.
2) Additive frailty models for proportional hazard models with two correlated random effects (intercept random effect with random slope).
3) Nested frailty models for hierarchically clustered data (with 2 levels of clustering) by including two iid gamma random effects.
4) Joint frailty models in the context of the joint modelling of recurrent events with a terminal event, for clustered data or not. A joint frailty model for two semi-competing risks and clustered data is also proposed.
5) Joint general frailty models in the context of the joint modelling of recurrent events with terminal event data, with two independent frailty terms.
6) Joint nested frailty models in the context of the joint modelling of recurrent events with a terminal event, for hierarchically clustered data (with two levels of clustering) by including two iid gamma random effects.
7) Multivariate joint frailty models for two types of recurrent events and a terminal event.
8) Joint models for longitudinal data and a terminal event.
9) Trivariate joint models for longitudinal data, recurrent events and a terminal event.
10) Joint frailty models for the validation of surrogate endpoints in multiple randomized clinical trials with failure-time and/or longitudinal endpoints, with the possibility to use a mediation analysis model.
11) Conditional and marginal two-part joint models for longitudinal semicontinuous data and a terminal event.
12) Joint frailty-copula models for the validation of surrogate endpoints in multiple randomized clinical trials with failure-time endpoints.
13) Generalized shared and joint frailty models for recurrent and terminal events. Proportional hazards (PH), additive hazard (AH), proportional odds (PO) and probit models are available in a fully parametric framework. For PH and AH models, it is possible to consider time-varying coefficients and a flexible semiparametric hazard function.
Prediction values are available (for a terminal event or for a new recurrent event). Left-truncated data (not for joint models), right-censored data, interval-censored data (only for the Cox proportional hazard and shared frailty models) and strata are allowed. In each model, the random effects have the gamma or normal distribution. Time-varying covariate effects can also be considered in Cox, shared and joint frailty models (1-5). The package includes concordance measures for Cox proportional hazards models and for shared frailty models.
14) Competing joint frailty model: a single type of recurrent event and two terminal events.
15) Functions to compute power and sample size for four gamma-frailty-based designs: shared frailty models, nested frailty models, joint frailty models, and general joint frailty models. Each design includes two primary functions: a power function, which computes power given a specified sample size; and a sample size function, which computes the required sample size to achieve a specified power.
16) Weibull illness-death model with or without shared frailty between transitions. Left-truncated and right-censored data are allowed.
17) Weibull competing risks model with or without shared frailty between the transitions. Left-truncated and right-censored data are allowed.
Moreover, the package can be used through its Shiny application, either locally or by following the link below.
The FisherEM algorithm, proposed by Bouveyron & Brunet (2012) <doi:10.1007/s11222-011-9249-9>, is an efficient method for the clustering of high-dimensional data. FisherEM models and clusters the data in a discriminative and low-dimensional latent subspace. It also provides a low-dimensional representation of the clustered data. A sparse version of the Fisher-EM algorithm is also provided.