Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
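For example, a request with the R httr package might look like this (the base URL below is a placeholder; substitute this site's actual address):

    # Search for packages matching "hello" and inspect the pagination headers.
    library(httr)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    content(resp)   # the matching packages
    headers(resp)   # pagination details, e.g. the total number of pages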
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides a collection of novel tools for generating species distribution and abundance models (SDMs) that are dynamic through both space and time. These highly flexible functions incorporate spatial and temporal aspects across key SDM stages, including cleaning and filtering species occurrence data, generating pseudo-absence records, assessing and correcting sampling biases and autocorrelation, extracting explanatory variables, and projecting distribution patterns. Throughout, functions utilise Google Earth Engine and Google Drive to minimise the computing power and storage demands associated with species distribution modelling at high spatio-temporal resolution.
Automatic generation of finite state machine models of dynamic decision-making that both have strong predictive power and are interpretable in human terms. We use an efficient model representation and a genetic algorithm-based estimation process to generate simple deterministic approximations that explain most of the structure of complex stochastic processes. We have applied the software to empirical data and demonstrated its ability to recover known data-generating processes by simulating data with agent-based models and correctly deriving the underlying decision models for multiple agent models and degrees of stochasticity.
This package provides a two-stage procedure for the denoising and clustering of a stack of noisy images acquired over time. Clustering only assumes that the data contain an unknown but small number of dynamic features. The method first denoises the signals using local spatial and full temporal information. The clustering step uses the previous output to aggregate voxels based on knowledge of their spatial neighborhood. Both steps use a single key tool based on the statistical comparison of the difference of two signals with the null signal. No assumption is therefore required on the shape of the signals. The data are assumed to be normally distributed (or at least to follow a symmetric distribution) with a known constant variance. Working pixelwise, the method can be time-consuming depending on the size of the data array, but it harnesses the power of multicore CPUs.
The goal of dndR is to provide a suite of Dungeons & Dragons related functions. This package is meant to be useful both to players and Dungeon Masters (DMs). Some functions apply to many tabletop role-playing games (e.g., dice rolling), but others focus on Fifth Edition (a.k.a. "5e"); where possible, both the 2014 and 2024 rules versions are supported.
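As a rough illustration of standard dice notation (plain base R, not dndR's own interface; see the package documentation for its actual dice-rolling functions):

    # Roll dice given notation such as "2d6" (2 dice with 6 sides each).
    roll_dice <- function(notation = "2d6") {
      parts <- as.integer(strsplit(notation, "d")[[1]])   # e.g. "2d20" -> 2 dice, 20 sides
      sum(sample(seq_len(parts[2]), size = parts[1], replace = TRUE))
    }
    roll_dice("2d20")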
This package provides functions to compute coefficients measuring the dependence of two or more variables. The functions can be deployed to gain information about functional dependencies of the variables, with emphasis on monotone functions. The statistics describe how well one response variable can be approximated by a monotone function of other variables. In regression analysis, variable selection is an important issue; in this framework, the functions could be useful tools for modeling the regression function. Detailed explanations of the subject can be found in the papers Liebscher (2014) <doi:10.2478/demo-2014-0004>, Liebscher (2017) <doi:10.1515/demo-2017-0012>, and Liebscher (2019, submitted).
Use numerical optimization to fit ordinary differential equations (ODEs) to time series data in order to examine the dynamic relationships between variables or the characteristics of a dynamical system. The package can estimate the parameters of ODEs up to second order and can also be applied to multilevel systems. See <https://github.com/yueqinhu/defit> for details.
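The general idea can be sketched in a few lines of R with deSolve and optim (an illustrative toy example, not defit's own interface):

    # Fit the rate constant and initial value of dx/dt = -k * x to noisy observations.
    library(deSolve)
    model <- function(t, state, parms) list(-parms[["k"]] * state)
    times <- seq(0, 10, by = 0.5)
    observed <- 5 * exp(-0.3 * times) + rnorm(length(times), sd = 0.1)  # simulated data
    sse <- function(par) {
      out <- ode(y = c(x = par[["x0"]]), times = times, func = model,
                 parms = c(k = par[["k"]]))
      sum((out[, "x"] - observed)^2)   # squared error between solution and data
    }
    fit <- optim(c(x0 = 4, k = 0.5), sse)
    fit$par   # estimated initial value and rate constant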
Computes a new measure, DNSL betweenness, by creating a new graph from an existing one in which nodes with self-loops are duplicated, so that this betweenness centrality does not drop that essential information. Implements Merelo & Molinari (2024) <doi:10.1007/s42001-023-00245-4>.
Pulls together a collection of datasets from Miguel de Carvalho's research articles, including, for example: de Carvalho (2012) <doi:10.1016/j.jspi.2011.08.016>; de Carvalho et al. (2012) <doi:10.1080/03610926.2012.709905>; de Carvalho et al. (2012) <doi:10.1016/j.econlet.2011.09.007>; de Carvalho and Davison (2014) <doi:10.1080/01621459.2013.872651>; de Carvalho and Rua (2017) <doi:10.1016/j.ijforecast.2015.09.004>; de Carvalho et al. (2023) <doi:10.1002/sta4.560>; de Carvalho et al. (2022) <doi:10.1007/s13253-021-00469-9>; Palacios et al. (2024) <doi:10.1214/24-BA1420>.
Deep compositional spatial models are standard spatial covariance models coupled with an injective warping function of the spatial domain. The warping function is constructed through a composition of multiple elemental injective functions in a deep-learning framework. The package implements two cases for the univariate setting: first, when these warping functions are known up to some weights that need to be estimated, and, second, when the weights in each layer are random. In the multivariate setting only the former case is available. Estimation and inference are done using `tensorflow`, which makes use of graphics processing units. For more details see Zammit-Mangion et al. (2022) <doi:10.1080/01621459.2021.1887741>, Vu et al. (2022) <doi:10.5705/ss.202020.0156>, Vu et al. (2023) <doi:10.1016/j.spasta.2023.100742>, and Shao et al. (2025) <doi:10.48550/arXiv.2505.12548>.
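Schematically, the construction described above couples a base covariance C_0 with a warping f formed by composing elemental injective maps (notation is ours, for illustration only):

    C(\mathbf{s}_1, \mathbf{s}_2) = C_0\bigl(f(\mathbf{s}_1),\, f(\mathbf{s}_2)\bigr),
    \qquad f = f_L \circ f_{L-1} \circ \cdots \circ f_1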
Downloads the public data available under the Brazilian Access to Information Law and performs a search on information requests and appeals made since 2015.
Fit of a double additive location-scale model with a nonparametric error distribution from possibly right- or interval-censored data. The additive terms in the location and dispersion submodels, as well as the unknown error distribution in the location-scale model, are estimated using Laplace P-splines. For more details, see Lambert (2021) <doi:10.1016/j.csda.2021.107250>.
The DWD provides gridded radar data for Germany in binary format. dwdradar reads these files and enables fast conversion into a numerical format.
This package provides functions for reading DCP and CDF.bin files generated by the dChip software.
Shiny application that performs bifurcation and phase-plane analysis of systems of ordinary differential equations. The package allows for computation of equilibrium curves as a function of a single free parameter, detection of transcritical, saddle-node, and Hopf bifurcation points along these curves, and computation of curves representing these transcritical, saddle-node, and Hopf bifurcation points as a function of two free parameters. The Shiny-based GUI allows visualization of the results in both 2D and 3D plots. The implemented methods for solution localisation and curve continuation are based on the book "Elements of Applied Bifurcation Theory" (Kuznetsov, Y. A., 1995; ISBN: 0-387-94418-4).
Genes that are differentially expressed between two or more experimental conditions can be detected in RNA-Seq. High biological variability may hinder the discovery of these genes, since it can differ across the fixed effects; this variability can, however, be captured by random effects. DEGRE was designed to identify differentially expressed genes considering both fixed and random effects on individuals, which are specified beforehand in the experimental design matrix. DEGRE implements preprocessing procedures to remove genes with near-zero read counts from the count matrix and to normalize by RLE, as published in the DESeq2 package (Love et al., 2014 <doi:10.1186/s13059-014-0550-8>), and it fits a regression for each gene using a generalized linear mixed model with a negative binomial distribution, followed by a Wald test to assess the regression coefficients.
This package provides a simple syntax to change the default values for function arguments, whether they are in packages or defined locally.
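In base R the same effect can be sketched with formals() (shown only to illustrate the concept; this is not the package's own syntax):

    f <- function(x, scale = 1) x * scale
    formals(f)$scale <- 10   # change the default value of the scale argument
    f(2)                     # now returns 20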
Easy-to-use and efficient interface for Bayesian inference of complex panel (time series) data using dynamic multivariate panel models by Helske and Tikka (2024) <doi:10.1016/j.alcr.2024.100617>. The package supports joint modeling of multiple measurements per individual, time-varying and time-invariant effects, and a wide range of discrete and continuous distributions. Estimation of these dynamic multivariate panel models is carried out via Stan. For an in-depth tutorial of the package, see Tikka and Helske (2024) <doi:10.48550/arXiv.2302.01607>.
Discrete splines are a class of univariate piecewise polynomial functions which are analogous to splines, but whose smoothness is defined via divided differences rather than derivatives. Tools for efficient computations relating to discrete splines are provided here. These tools include discrete differentiation and integration, various matrix computations with discrete derivative or discrete spline basis matrices, and interpolation within discrete spline spaces. These techniques are described in Tibshirani (2020) <doi:10.48550/arXiv.2003.03886>.
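The key building block, the first divided difference, is easy to illustrate in base R (a generic sketch, not the package's API):

    x <- c(0, 0.5, 1.5, 2, 3)     # design points
    f <- x^2                      # function values at the design points
    diff(f) / diff(x)             # first divided differences (the discrete derivative)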
Constructs confidence regions without the need to know the sampling distribution of bivariate data. The method was proposed by Zhiqiu Hu & Rong-cai Yang (2013) <doi:10.1371/journal.pone.0081179.g001>.
This package provides tools for describing parameters of algorithms in an abstract way. A description can include an id, a description, a domain (a range or a list of values), and a default value. dynparam can also convert parameter sets to the ParamHelpers format, in order to be able to use dynparam in conjunction with mlrMBO.
Functionality for analyzing dose-volume histograms (DVH) in radiation oncology: read DVH text files and calculate DVH metrics as well as generalized equivalent uniform dose (gEUD), biologically effective dose (BED), equivalent dose in 2 Gy fractions (EQD2), normal tissue complication probability (NTCP), and tumor control probability (TCP). Show DVH diagrams, and check and visualize quality assurance constraints for the DVH. Includes a web-based graphical user interface.
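For reference, BED and EQD2 follow the standard linear-quadratic formulas, sketched here as plain R functions (illustrative only; these are not the package's function names):

    # n fractions of dose d (Gy), tissue-specific alpha/beta ratio ab (Gy)
    bed  <- function(n, d, ab) n * d * (1 + d / ab)
    eqd2 <- function(n, d, ab) n * d * (d + ab) / (2 + ab)
    bed(n = 10, d = 3, ab = 3)    # 60 Gy
    eqd2(n = 10, d = 3, ab = 3)   # 36 Gy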
Includes functions that researchers or practitioners may use to clean raw data, converting HTML, xlsx, and txt data files into other formats. It can also be used to manipulate text variables, extract numeric variables from text variables, and perform other variable-cleaning tasks. The package originated from the author's project on creative performance in online education environments; the resulting paper from that study will be published soon.
Various methods for the identification of trend and seasonal components in time series (TS) are provided. Among them is a data-driven locally weighted regression approach with automatically selected bandwidth for equidistant short-memory time series. The approach is a combination and extension of the algorithms by Feng (2013) <doi:10.1080/02664763.2012.740626> and Feng, Gries, and Fritz (2020) <doi:10.1080/10485252.2020.1759598>, and a brief description of this new method is provided in the package documentation. Furthermore, the package allows its users to apply the base model of the Berlin procedure, version 4.1, as described in Speth (2004) <https://www.destatis.de/DE/Methoden/Saisonbereinigung/BV41-methodenbericht-Heft3_2004.pdf?__blob=publicationFile>. Permission to include this procedure was kindly provided by the Federal Statistical Office of Germany.
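As a generic illustration of locally weighted regression for trend extraction (base-R lowess with a fixed span; the package's automatic bandwidth selection is not shown):

    set.seed(1)
    t <- 1:120
    y <- 0.05 * t + sin(2 * pi * t / 12) + rnorm(120, sd = 0.3)   # trend + seasonality + noise
    trend <- lowess(t, y, f = 0.3)                                # f is the smoother span
    plot(t, y, type = "l"); lines(trend, col = "red", lwd = 2)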
By systematically aggregating and processing textual reports from earthquakes, floods, storms, wildfires, and other natural disasters, the framework enables a holistic assessment of crisis narratives. Intelligent cleaning and normalization techniques transform raw commentary into structured data, ensuring precise extraction of disaster-specific insights. Collective sentiments of affected communities are quantitatively scored and qualitatively categorized, providing a multifaceted view of societal responses under duress. Interactive geographic maps and temporal charts illustrate the evolution and spatial dispersion of emotional reactions and impact indicators.