Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the total number of pages) is returned
in the response headers.
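For example, here is a minimal sketch of calling this endpoint from Python with the requests library (the base URL is a placeholder and the JSON response body is an assumption; check the actual response and its headers):

    import requests  # third-party HTTP client

    BASE_URL = "https://example.org"  # placeholder; use this site's actual address

    # Search for packages matching "hello", requesting the first page of 20 items.
    resp = requests.get(
        f"{BASE_URL}/api/packages",
        params={"search": "hello", "page": 1, "limit": 20},
    )
    resp.raise_for_status()

    # Pagination information (number of pages, etc.) is returned in the response headers.
    print(dict(resp.headers))

    # The body is assumed here to be JSON listing the matching packages.
    print(resp.json())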
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides tools for linear fitting with complex variables. It includes ordinary least-squares (zlm()) and robust M-estimation (rzlm()), as well as complex-variable methods for commonly used generics. Originally adapted from the rlm() functions of MASS and the lm() functions of stats.
This package provides a collection of synthetic datasets simulating sales transactions from a fictional company. The dataset includes various related tables that contain essential business and operational data, useful for analyzing sales performance and other business insights. Key tables included in the package are:
- "sales": Contains data on individual sales transactions, including order details, pricing, quantities, and customer information.
- "customer": Stores customer-specific details such as demographics, geographic location, occupation, and birthday.
- "store": Provides information about stores, including location, size, status, and operational dates.
- "orders": Contains details about customer orders, including order and delivery dates, store, and customer data.
- "product": Contains data on products, including attributes such as product name, category, price, cost, and weight.
- "calendar": A time-based table that includes date-related attributes like year, month, quarter, day, and working-day indicators.
This dataset is ideal for practicing data analysis, performing time-series analysis, creating reports, or simulating business intelligence scenarios.
Calculates equitable overload compensation for college instructors based on institutional policies, enrollment thresholds, and regular teaching load limits. Compensation is awarded only for credit hours that exceed the regular load and meet minimum enrollment criteria. When enrollment is below a specified threshold, pay is prorated accordingly. The package prioritizes compensation from high-enrollment courses, or optionally from low-enrollment courses for fairness, depending on a user-defined strategy. It includes tools for flexible policy settings and instructor filtering, and produces clean, audit-ready summary tables suitable for payroll and administrative reporting.
Continuous glucose monitoring (CGM) systems provide real-time, dynamic glucose information by tracking interstitial glucose values throughout the day. Glycemic variability, also known as glucose variability, is an established risk factor for hypoglycemia (Kovatchev) and has been shown to be a risk factor in diabetes complications. Over 20 metrics of glycemic variability have been identified. Here, we provide functions to calculate glucose summary metrics and glucose variability metrics (as defined in clinical publications), and to visualize trends in CGM data. Cho P, Bent B, Wittmann A, et al. (2020) <https://diabetes.diabetesjournals.org/content/69/Supplement_1/73-LB.abstract> American Diabetes Association (2020) <https://professional.diabetes.org/diapro/glucose_calc> Kovatchev B (2019) <doi:10.1177/1932296819826111> Kovatchev BP (2017) <doi:10.1038/nrendo.2017.3> Tamborlane WV, Beck RW, Bode BW, et al. (2008) <doi:10.1056/NEJMoa0805017> Umpierrez GE, Kovatchev BP (2018) <doi:10.1016/j.amjms.2018.09.010>.
This package provides a modeling tool allowing gene selection, reverse engineering, and prediction in cascade networks. Jung, N., Bertrand, F., Bahram, S., Vallat, L., and Maumy-Bertrand, M. (2014) <doi:10.1093/bioinformatics/btt705>.
Identification and visualization of groups of closely spaced mutations in the DNA sequence of a cancer genome. Highly mutated zones are located in the symmetric dissimilarity matrix using anti-Robinson matrix properties. Several data sets are produced to describe and plot the clustered mutation information.
Computes the uniform rate of profit, the vector of prices of production, and the vector of labor values, and also computes measures of deviation between relative prices of production and relative values. <https://scholarworks.umass.edu/econ_workingpaper/347/>. You provide the input-output data and clptheory does the calculations for you.
Tests on properties of space-time covariance functions. Tests on symmetry, separability, and for assessing different forms of non-separability are available. Moreover, tests on some classes of covariance functions, such as product-sum models, Gneiting models, and integrated product models, are provided. It is the companion R package to the papers of Cappello, C., De Iaco, S., Posa, D., 2018, Testing the type of non-separability and some classes of space-time covariance function models <doi:10.1007/s00477-017-1472-2> and Cappello, C., De Iaco, S., Posa, D., 2020, covatest: an R package for selecting a class of space-time covariance functions <doi:10.18637/jss.v094.i01>.
This package provides a simple runner for fuzz-testing functions in an R package's public interface. Fuzz testing helps identify functions lacking sufficient argument validation, and uncovers problematic inputs that, while valid by function signature, may cause issues within the function body.
This package provides functions for computing the density and the log-likelihood function of closed-skew normal variates, and for generating random vectors sampled from this distribution. See Gonzalez-Farias, G., Dominguez-Molina, J., and Gupta, A. (2004). The closed skew normal distribution, Skew-elliptical distributions and their applications: a journey beyond normality, Chapman and Hall/CRC, Boca Raton, FL, pp. 25-42.
Parameter estimation, one-step-ahead forecasting, and prediction at new locations for spatio-temporal data.
Semiparametric estimation for censored time series with a lower detection limit. The latent response is a stationary process with the Markov property of order one. Copula parameter estimation (COPC) and conditional quantile estimation are included for five available copula functions. Copula selection methods based on the L2 distance from the empirical copula function are also included.
Conditioned Latin hypercube sampling, as published by Minasny and McBratney (2006) <doi:10.1016/j.cageo.2005.12.009>. This method proposes to stratify sampling in the presence of ancillary data. An extension of this method, which associates a cost with each individual and takes it into account during the optimisation process, is also proposed (Roudier et al., 2012, <doi:10.1201/b12728>).
This package provides a curated list of copepod-fish ecological interaction records. It contains the taxonomy of both the copepod and the fish, as well as the publication from which the information was obtained. This database contains only marine and brackish-water fish species; it excludes fish species that inhabit only freshwater.
There are several non-functional-form-based interaction tests for testing interaction in unreplicated two-way layouts. However, no single test can detect all possible patterns of interaction, and each test is sensitive to a particular pattern of interaction. This package combines six non-functional-form-based interaction tests for testing additivity. These six tests were proposed by Boik (1993) <doi:10.1080/02664769300000004>, Piepho (1994), Kharrati-Kopaei and Sadooghi-Alvandi (2007) <doi:10.1080/03610920701386851>, Franck et al. (2013) <doi:10.1016/j.csda.2013.05.002>, Malik et al. (2016) <doi:10.1080/03610918.2013.870196>, and Kharrati-Kopaei and Miller (2016) <doi:10.1080/00949655.2015.1057821>. The p-values of these six tests are combined by Bonferroni, Sidak, Jacobi polynomial expansion, and Gaussian copula methods to provide researchers with a testing approach that leverages many existing methods to detect disparate forms of non-additivity. This package is based on the following published paper: Shenavari and Kharrati-Kopaei (2018) "A Method for Testing Additivity in Unreplicated Two-Way Layouts Based on Combining Multiple Interaction Tests". In addition, several sentences in the help files and descriptions were copied from that paper.
This package provides methods for interpreting CoDa (Compositional Data) regression models along the lines of "Pairwise share ratio interpretations of compositional regression models" (Dargel and Thomas-Agnan 2024) <doi:10.1016/j.csda.2024.107945>. The new methods include variation scenarios, elasticities, elasticity differences and share ratio elasticities. These tools are independent of log-ratio transformations and allow an interpretation in the original space of shares. CoDaImpact is designed to be used with the compositions package and its ecosystem.
Tests, utilities, and case studies for analyzing significance in clustered binary matched-pair data. The central function clust.bin.pair uses one of several tests to calculate a Chi-square statistic. Implemented are the tests of Eliasziw (1991) <doi:10.1002/sim.4780101211>, Obuchowski (1998) <doi:10.1002/(SICI)1097-0258(19980715)17:13%3C1495::AID-SIM863%3E3.0.CO;2-I>, Durkalski (2003) <doi:10.1002/sim.1438>, and Yang (2010) <doi:10.1002/bimj.201000035>, with McNemar (1947) <doi:10.1007/BF02295996> included for comparison. The utility functions nested.to.contingency and paired.to.contingency convert data between various useful formats. Thyroids and psychiatry are the canonical datasets from Obuchowski and Petryshen (1989) <doi:10.1016/0165-1781(89)90196-0>.
Set of forecasting tools to predict ICU bed demand using a Vector Error Correction model with a single cointegrating vector. The method is described in Berta, P., Lovaglio, P.G., Paruolo, P., Verzillo, S., 2020, "Real Time Forecasting of Covid-19 Intensive Care Units demand", Health, Econometrics and Data Group (HEDG) Working Papers 20/16, HEDG, Department of Economics, University of York, <https://www.york.ac.uk/media/economics/documents/hedg/workingpapers/2020/2016.pdf>.
This package provides constructions of series of partially balanced incomplete block designs (PBIB) based on the combinatory method S, introduced by Rezgui et al. (2014) <doi:10.3844/jmssp.2014.45.48>. This package also offers the associated U-type designs. Version 1.1-1 generalizes the approach to designs with v = wnl treatments. It includes various rectangular and generalized rectangular right angular association schemes with 4, 5, and 7 associated classes.
This is a simple R package that measures stated preferences using the traditional conjoint analysis method.
Classification of climate according to Koeppen-Geiger, of aridity indices, of continentality indices, of water balance after Thornthwaite, and of viticultural bioclimatic indices. Drawing of climographs: Thornthwaite, Peguy, Bagnouls-Gaussen.
Solves optimal pairing and matching problems using linear assignment algorithms. Provides implementations of the Hungarian method (Kuhn 1955) <doi:10.1002/nav.3800020109>, Jonker-Volgenant shortest path algorithm (Jonker and Volgenant 1987) <doi:10.1007/BF02278710>, Auction algorithm (Bertsekas 1988) <doi:10.1007/BF02186476>, cost-scaling (Goldberg and Kennedy 1995) <doi:10.1007/BF01585996>, scaling algorithms (Gabow and Tarjan 1989) <doi:10.1137/0218069>, push-relabel (Goldberg and Tarjan 1988) <doi:10.1145/48014.61051>, and Sinkhorn entropy-regularized transport (Cuturi 2013) <doi:10.48550/arxiv.1306.0895>. Designed for matching plots, sites, samples, or any pairwise optimization problem. Supports rectangular matrices, forbidden assignments, data frame inputs, batch solving, k-best solutions, and pixel-level image morphing for visualization. Includes automatic preprocessing with variable health checks, multiple scaling methods (standardized, range, robust), greedy matching algorithms, and comprehensive balance diagnostics for assessing match quality using standardized differences and distribution comparisons.
Shiny app for creating interactive consort flow diagrams and other types of flow diagrams, see Moher, Schulz and Altman (2001) <doi:10.1016/S0140-6736(00)04337-3>.
This is a tiny package to generate CRediT author statements (<https://credit.niso.org/>). It provides three functions: one to create a template, one to read it back, and one to generate the CRediT author statement in a text file.