Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in response headers.
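For example, the first page of results for "hello" can be fetched from R with 'httr' and 'jsonlite' (a minimal sketch; the host below is a placeholder, and the exact pagination header names are not documented here, so inspect the headers directly):

    library(httr)
    library(jsonlite)

    base_url <- "https://example.org"   # placeholder: use this site's own address
    resp <- GET(paste0(base_url, "/api/packages"),
                query = list(search = "hello", page = 1, limit = 20))

    results <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
    str(headers(resp))   # pagination information is carried in these headers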
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Google Trends provides cross-sectional and time-series data on searches, but lacks readily available longitudinal data. Researchers who want to create longitudinal Google Trends data on their own face practical challenges, such as normalized counts that make it difficult to combine cross-sectional and time-series data, and limitations in data formats and timelines that restrict data granularity over extended time periods. This package addresses these issues and enables researchers to generate longitudinal Google Trends data. It is built on 'pytrends', a Python library that acts as the unofficial Google Trends API for collecting Google Trends data. As long as the Google Trends API, 'pytrends', and all their dependencies are working, this package will work. During testing, we noticed that for the same input (keyword, topic, data_format, timeline), the output index can vary from time to time. Moreover, if the keyword is not very popular, the resulting dataset will contain many zeros, which will greatly affect the final result. While this package has no control over the accuracy or quality of Google Trends data, once the data are created, it converts them to longitudinal data. In addition, the user may encounter a 429 Too Many Requests error when using cross_section() and time_series() to collect Google Trends data. This error indicates that the user has exceeded the rate limits set by the Google Trends API. For more information about the Google Trends API and 'pytrends', visit <https://pypi.org/project/pytrends/>.
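As an illustration only, a hypothetical sketch of collecting data with the two functions mentioned above; the argument names simply mirror the inputs listed in the description (keyword, topic, data_format, timeline) and are not the package's documented signature:

    ## Hypothetical call sketch; argument names and values are assumptions, not the real API.
    cs <- cross_section(keyword = "coffee", data_format = "weekly",
                        timeline = "2018-01-01 2022-12-31")
    ts <- time_series(keyword = "coffee", data_format = "weekly",
                      timeline = "2018-01-01 2022-12-31")
    ## A 429 Too Many Requests error at this point means the Google Trends rate
    ## limits were exceeded; wait before retrying.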
Perform simultaneous estimation and variable selection for correlated bivariate mixed outcomes (one continuous outcome and one binary outcome per cluster) using penalized generalized estimating equations. In addition, clustered Gaussian and binary outcomes can also be modeled. The SCAD, MCP, and LASSO penalties are supported. Cross-validation can be performed to find the optimal regularization parameter(s).
Run Paris Agreement Capital Transition Assessment ('PACTA') analyses on multiple loan books in a structured way. Provides access to standard PACTA metrics and additional PACTA-related metrics for multiple loan books. Results take the form of CSV files and plots and are exported to user-specified project paths.
Mixtures of Poisson Generalized Linear Models for high dimensional count data clustering. The (multivariate) responses can be partitioned into a set of blocks. Three different parameterizations of the linear predictor are considered. The models are estimated according to the EM algorithm with an efficient initialization scheme <doi:10.1016/j.csda.2014.07.005>.
Hybridization probes for target sequences can be designed based on melting temperature values calculated by the R package TmCalculator <https://CRAN.R-project.org/package=TmCalculator> and methods extended from Beliveau, B. J. (2018) <doi:10.1073/pnas.1714530115>, and those hybridization probes can be used to capture specific target regions in fluorescence in situ hybridization and next-generation sequencing experiments.
Utilizes the lme4 and optimx packages (previously the optim() function from 'stats') to estimate (generalized) linear mixed models (GLMMs) with factor structures using a profile likelihood approach, as outlined in Jeon and Rabe-Hesketh (2012) <doi:10.3102/1076998611417628> and Rockwood and Jeon (2019) <doi:10.1080/00273171.2018.1516541>. Factor analysis and item response models can be extended to allow for an arbitrary number of nested and crossed random effects, making it useful for multilevel and cross-classified models.
Compute personal values scores from various questionnaires based on the theoretical constructs proposed by Professor Shalom H. Schwartz. Designed for researchers and practitioners in psychology, sociology, and related fields, the package facilitates the quantification and visualization of different dimensions related to personal values from survey data. It incorporates the recommended statistical adjustment to enhance the accuracy and interpretation of the results.
An R package version of an open, science-based online personality test from <https://openpsychometrics.org/tests/IPIP-BFFM/>, providing a better-designed interface and a more detailed report. The core command launch_test() opens a personality test in your browser and generates a report after you click "Submit". In this report, your results are compared with other people's to show what these results mean. Other people's data is from <https://openpsychometrics.org/_rawdata/BIG5.zip>.
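A minimal usage sketch, assuming the package is attached; launch_test() is the core command named above and is called here with no arguments:

    ## Opens the test in the default browser; clicking "Submit" produces the report.
    launch_test()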
Cluster analysis via nonparametric density estimation is performed. Operationally, the kernel method is used throughout to estimate the density. Diagnostic methods for evaluating the quality of the clustering are available. The package also includes a routine to estimate the probability density function obtained by the kernel method, given a set of data with arbitrary dimensions.
Deploy, maintain, and invoke predictive models using the Alteryx Promote REST API. Alteryx Promote is available at the URL: <https://www.alteryx.com/products/alteryx-promote>.
Implementation of T. Hailperin's procedure to calculate lower and upper bounds of the probability for a propositional-logic expression, given equality and inequality constraints on the probabilities for other expressions. Truth-valuation is included as a special case. Applications range from decision-making and probabilistic reasoning to pedagogical uses in probability and logic courses. For more details see T. Hailperin (1965) <doi:10.1080/00029890.1965.11970533>, T. Hailperin (1996) "Sentential Probability Logic" ISBN:0-934223-45-9, and the package documentation. Requires the lpSolve package.
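As a generic illustration of the underlying idea (not this package's interface): bounds of this kind can be computed as small linear programs over the truth-value cases. With 'lpSolve', the lower and upper bounds of P(A and B) given only P(A) = 0.7 and P(B) = 0.6 come out as 0.3 and 0.6:

    ## Generic sketch, not this package's API. Variables are the probabilities of
    ## the four truth-value cases: A&B, A&!B, !A&B, !A&!B.
    library(lpSolve)
    obj  <- c(1, 0, 0, 0)                 # probability mass on the A&B case
    con  <- rbind(c(1, 1, 1, 1),          # total probability is 1
                  c(1, 1, 0, 0),          # P(A)
                  c(1, 0, 1, 0))          # P(B)
    rhs  <- c(1, 0.7, 0.6)
    dirs <- rep("=", 3)
    lower <- lp("min", obj, con, dirs, rhs)$objval   # 0.3
    upper <- lp("max", obj, con, dirs, rhs)$objval   # 0.6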
This package provides standardised functions for quantifying plant disease intensity and disease development over time. The package implements Percent Disease Index (PDI) for assessing overall disease severity based on categorical ratings, Area Under the Disease Progress Curve (AUDPC) for summarizing disease progression using trapezoidal integration, and Relative AUDPC (rAUDPC) for expressing disease development relative to the maximum possible severity over the observation period. These indices are widely used in plant pathology and epidemiology for comparing treatments, cultivars, and environments.
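For reference, a generic sketch of the trapezoidal AUDPC calculation (not necessarily this package's function signature): each pair of consecutive assessments contributes the mean of the two severities times the length of the interval between them.

    ## Generic trapezoidal AUDPC, not this package's API.
    audpc_trapezoid <- function(severity, time) {
      n <- length(time)
      sum((severity[-n] + severity[-1]) / 2 * diff(time))
    }
    ## Example: severity (%) recorded at four weekly assessments.
    audpc_trapezoid(severity = c(5, 20, 45, 80), time = c(0, 7, 14, 21))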
Estimates two-level multilevel linear models and two-level multivariate linear multilevel models with weights, following the Probability Weighted Iterative Generalised Least Squares approach. For details see Veiga et al. (2014) <doi:10.1111/rssc.12020>.
There are three sets of functions. The first produces basic properties of a graph and generates samples from multinomial distributions to facilitate the simulation functions (they may be used for other purposes as well). The second provides various simulation functions for a Potts model as described in Potts, R. B. (1952) <doi:10.1017/S0305004100027419>. The third currently includes only one function, which computes the normalizing constant of a Potts model based on simulation results.
Levels and changes of productivity and profitability are measured with various indices. The package contains the multiplicatively complete Färe-Primont, Fisher, Hicks-Moorsteen, Laspeyres, Lowe, and Paasche indices, as well as the classic Malmquist productivity index. The Färe-Primont and Lowe indices satisfy the transitivity property and can therefore be used for multilateral or multitemporal comparison. The Fisher, Hicks-Moorsteen, Laspeyres, Malmquist, and Paasche indices are not transitive and are only to be used for binary comparison. All indices can also be decomposed into different components, providing insightful information on the sources of productivity and profitability changes. When using the Malmquist productivity index, the technological change index can be further decomposed into bias technological change components. The package also allows prohibiting technological regression (negative technological change). For the Fisher, Hicks-Moorsteen, Laspeyres, and Paasche indices, as well as the transitive Färe-Primont and Lowe indices, it is furthermore possible to rule out technological change. Deflated shadow prices can also be obtained. In addition, the package offers optional parallel computing, depending on the user's computer configuration. All computations are carried out with nonparametric Data Envelopment Analysis (DEA), and several assumptions regarding returns to scale are available. All DEA linear programs are implemented using 'lp_solve'.
This package implements optimization techniques for Lasso regression, R. Tibshirani (1996) <doi:10.1111/j.2517-6161.1996.tb02080.x>, using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) and the Iterative Shrinkage-Thresholding Algorithm (ISTA) based on proximal operators, A. Beck (2009) <doi:10.1137/080716542>. The package is useful for high-dimensional regression problems and includes cross-validation procedures to select optimal penalty parameters.
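To illustrate the building block these algorithms share (a generic sketch, not this package's exported functions): ISTA alternates a gradient step on the least-squares loss with the soft-thresholding proximal operator of the L1 penalty.

    ## Generic ISTA for the Lasso, not this package's API.
    soft_threshold <- function(z, gamma) sign(z) * pmax(abs(z) - gamma, 0)
    ista_lasso <- function(X, y, lambda, iters = 500) {
      L    <- max(eigen(crossprod(X), only.values = TRUE)$values)  # Lipschitz constant
      beta <- rep(0, ncol(X))
      for (i in seq_len(iters)) {
        grad <- crossprod(X, X %*% beta - y)   # gradient of 0.5 * ||y - X beta||^2
        beta <- soft_threshold(beta - grad / L, lambda / L)
      }
      beta
    }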
Data and statistics of Pakistan Social and Living Standards Measurement (PSLM) survey 2014-15 from Pakistan Bureau of Statistics (<http://www.pbs.gov.pk/>).
Bindings for additional regression models for use with the parsnip package, including ordinary and sparse partial least squares models for regression and classification (Rohart et al. (2017) <doi:10.1371/journal.pcbi.1005752>).
This package provides methods for building self-organizing maps (SOMs) with a number of distinguishing features, such as automatic centroid detection and cluster visualization using starbursts. For more details see the paper "Improved Interpretability of the Unified Distance Matrix with Connected Components" by Hamel and Brown (2011) in <ISBN:1-60132-168-6>. The package provides user-friendly access to two models we construct: (a) a SOM model and (b) a centroid-based clustering model. The package also exposes a number of quality metrics for the quantitative evaluation of the map, Hamel (2016) <doi:10.1007/978-3-319-28518-4_4>. Finally, we reintroduce our fast, vectorized training algorithm for SOMs with substantial improvements. It is about an order of magnitude faster than the canonical, stochastic C implementation <doi:10.1007/978-3-030-01057-7_60>.
This package implements the phinterval vector class for representing time spans that may contain gaps (disjoint intervals) or be empty. This class generalizes the lubridate package's interval class to support vectorized set operations (intersection, union, difference, complement) that always return a valid time span, even when disjoint or empty intervals are created.
ProTracker is a popular music tracker for sequencing music on a Commodore Amiga machine. This package offers the opportunity to import, export, manipulate, and play ProTracker module files. Even though the file format could be considered archaic, it remains popular to this day. This package intends to contribute to this popularity and thereby keep the legacy of ProTracker and the Commodore Amiga alive.
This package implements an n-dimensional parameter space partitioning algorithm for evaluating the global behaviour of formal computational models as described by Pitt, Kim, Navarro and Myung (2006) <doi:10.1037/0033-295X.113.1.57>.
This package provides analytic and simulation tools to estimate the minimum sample size required for achieving a target prediction mean-squared error (PMSE) or a specified proportional PMSE reduction (pPMSEr) in linear regression models. Functions implement the criteria of Ma (2023) <https://digital.wpi.edu/downloads/0g354j58c>, support covariance-matrix handling, and include helpers for root-finding and diagnostic plotting.
This package provides tools for modelling populations and demography using matrix projection models, with deterministic and stochastic model implementations. Includes population projection, indices of short- and long-term population size and growth, perturbation analysis, convergence to stability or stationarity, and diagnostic and manipulation tools.
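As a generic illustration of the projection step these tools are built on (not this package's own functions), a stage-structured population vector is advanced one time step by multiplying it with the projection matrix, and the dominant eigenvalue gives the asymptotic growth rate:

    ## Generic matrix projection sketch, not this package's API: n_{t+1} = A %*% n_t.
    A <- matrix(c(0.0, 1.5, 2.0,    # fecundities
                  0.5, 0.0, 0.0,    # survival from stage 1 to stage 2
                  0.0, 0.4, 0.1),   # survival to stage 3 and stage-3 retention
                nrow = 3, byrow = TRUE)
    n0 <- c(100, 40, 10)            # starting abundances per stage
    n1 <- A %*% n0                  # population after one projection interval
    lambda <- Re(eigen(A)$values[1])  # asymptotic growth rate (dominant eigenvalue)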