Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items per page. Pagination information (such as the total number of pages) is returned
in the response headers.
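For example, the endpoint can be queried from R with the httr package (a minimal sketch; the host name below is a placeholder for this site's domain, and the exact shape of the JSON body and pagination headers should be checked against a real response):

    library(httr)

    # Query the package search API: search term, first page, 20 items per page.
    resp <- GET("https://example.org/api/packages",   # placeholder host
                query = list(search = "hello", page = 1, limit = 20))
    stop_for_status(resp)

    results <- content(resp, as = "parsed")   # parsed JSON body
    headers(resp)                             # pagination details live in here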
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package provides an R driver for Apache Drill <https://drill.apache.org>, which can connect to an Apache Drill cluster <https://drill.apache.org/docs/installing-drill-on-the-cluster> or a single drillbit <https://drill.apache.org/docs/embedded-mode-prerequisites>, retrieve results (as data frames) from SQL queries, and check the current configuration status. This link <https://drill.apache.org/docs> contains more information about Apache Drill.
This package provides functions for fitting and clustering data in an easy and intuitive way using Mixture of Unigrams models by means of the Expectation-Maximization algorithm (Nigam, K. et al. (2000) <doi:10.1023/A:1007692713085>), Mixtures of Dirichlet-Multinomials estimated by gradient descent (Anderlucci, Viroli (2020) <doi:10.1007/s11634-020-00399-3>), and Deep Mixtures of Multinomials whose estimates are obtained with a Gibbs sampling scheme (Viroli, Anderlucci (2020) <doi:10.1007/s11222-020-09989-9>). There are also functions for graphical representation of the clusters obtained.
Implementation of DetMCD, a new algorithm for robust and deterministic estimation of location and scatter. The benefits of robust and deterministic estimation are explained in Hubert, Rousseeuw and Verdonck (2012) <doi:10.1080/10618600.2012.672100>.
Calculates expected values, variances, other moments (kth moment, truncated mean), stop-loss, mean excess loss, Value-at-Risk (VaR), and Tail Value-at-Risk (TVaR), as well as some density and cumulative (survival) functions of continuous, discrete, and compound distributions. This package also includes a visual Shiny component that lets students visualize distributions and understand the impact of their parameters. It is intended to extend the stats package and help students develop an intuition for probability.
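For reference, the textbook definitions of the tail risk measures named above (standard actuarial notation, not necessarily this package's own parameterization):

    \mathrm{VaR}_\kappa(X) = \inf\{\, x \in \mathbb{R} : F_X(x) \ge \kappa \,\}, \qquad
    \mathrm{TVaR}_\kappa(X) = \frac{1}{1-\kappa}\int_\kappa^1 \mathrm{VaR}_u(X)\,du,

    \text{stop-loss: } \pi_d(X) = \mathbb{E}\left[(X-d)_+\right], \qquad
    \text{mean excess loss: } e_d(X) = \mathbb{E}\left[X-d \mid X>d\right].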
Analyze and visualize the rhythmic behavior of animals using the degree of functional coupling (see Scheibe (1999) <doi:10.1076/brhm.30.2.216.1420>); compute and visualize harmonic power, actograms, average activity, and the diurnality index.
Perform model selection using distribution and probability-based methods, including standardized AIC, BIC, and AICc. These standardized information criteria allow one to perform model selection in a way similar to the prevalent "Rule of 2" method, but formalize the method to rely on probability theory. A novel goodness-of-fit procedure for assessing linear regression models is also available. This test relies on theoretical properties of the estimated error variance for a normal linear regression model, and employs a bootstrap procedure to assess the null hypothesis that the fitted model shows no lack of fit. For more information, see Koeneman and Cavanaugh (2023) <arXiv:2309.10614>. Functionality to perform all subsets linear or generalized linear regression is also available.
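For context, the classical information criteria that the standardized versions build on are (textbook forms, with \hat{L} the maximized likelihood, k the number of parameters, and n the sample size; the standardized variants are described in the referenced paper):

    \mathrm{AIC} = -2\log\hat{L} + 2k, \qquad
    \mathrm{BIC} = -2\log\hat{L} + k\log n, \qquad
    \mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1}.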
Gives you the ability to use arbitrary Docker images (including custom ones) to process R Markdown code chunks.
Prepare the results of a discrete choice experiment (DCE) to be analysed through choice models. DCEmgmt reshapes DCE data from wide to long format, taking into account the special characteristics of a DCE (see the generic sketch below). DCEmgmt includes the function DCEestm, which estimates choice models once the database has been reshaped with DCEmgmt.
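As a generic base-R illustration of the wide-to-long reshaping step (this uses stats::reshape, not DCEmgmt's own functions; the column names are invented for the example):

    # One row per respondent, one column per choice task (wide format).
    wide <- data.frame(id      = 1:2,
                       choice1 = c("bus", "car"),
                       choice2 = c("car", "bus"))

    # One row per respondent-task pair (long format).
    long <- reshape(wide, direction = "long",
                    varying = c("choice1", "choice2"),
                    v.names = "choice", timevar = "task", idvar = "id")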
This package provides a domain-specific language for specifying recursions and translating them into dynamic-programming algorithms. See <https://en.wikipedia.org/wiki/Dynamic_programming> for a description of dynamic programming.
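As a generic illustration of the dynamic-programming idea (plain R, not this package's DSL):

    # Naive recursion: recomputes the same subproblems exponentially often.
    fib <- function(n) if (n < 2) n else fib(n - 1) + fib(n - 2)

    # Dynamic-programming version: solve each subproblem once, bottom-up.
    fib_dp <- function(n) {
      if (n < 2) return(n)
      memo <- numeric(n + 1)   # memo[i] holds fib(i - 1)
      memo[2] <- 1
      for (i in 3:(n + 1)) memo[i] <- memo[i - 1] + memo[i - 2]
      memo[n + 1]
    }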
Dynamic Reservoir Simulation Model (DYRESM) and Computational Aquatic Ecosystem Dynamics Model (CAEDYM) model development, including assistance with calibrating selected model parameters and visualising model output through time series, profile, contour, and scatter plots. For more details, see Yu et al. (2023) <https://journal.r-project.org/articles/RJ-2023-008/>.
Likelihood-based inference for skewed count distributions, typically of degrees used in network modeling. "degreenet" is a part of the "statnet" suite of packages for network analysis. See Jones and Handcock <doi:10.1098/rspb.2003.2369>.
This package provides Portuguese-translated versions of the following datasets: airlines, airports, ames_raw, AwardsManagers, babynames, Batting, diamonds, faithful, fueleconomy, Fielding, flights, gapminder, gss_cat, iris, Managers, mpg, mtcars, atmos, penguins, People, Pitching, pixarfilms, planes, presidential, table1, table2, table3, table4a, table4b, table5, vehicles, weather, who.
This package provides a set of algorithms based on Quinn et al. (1991) <doi:10.1002/hyp.3360050106> for processing river network and digital elevation data to build implementations of Dynamic TOPMODEL, a semi-distributed hydrological model proposed in Beven and Freer (2001) <doi:10.1002/hyp.252>. The dynatop package implements simulation code for Dynamic TOPMODEL based on the output of dynatopGIS.
Converting date ranges into dating steps eases the visualization of changes over time in, for example, pottery consumption, style, and other variables. This package provides tools to process and prepare data for visualization, employing the concept of aoristic analysis.
The df2yaml package aims to simplify converting a dataframe to YAML <https://yaml.org/>. A dataframe with multiple key columns and one value column is converted to a multi-level hierarchy.
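As a generic illustration of the underlying idea (this nests by hand and serializes with the yaml package, rather than using df2yaml's own interface, which is not shown here):

    library(yaml)

    # A dataframe with two key columns and one value column.
    df <- data.frame(group = c("a", "a", "b"),
                     name  = c("x", "y", "z"),
                     value = c(1L, 2L, 3L))

    # Nest by the key columns, then serialize the resulting list as YAML.
    nested <- lapply(split(df, df$group),
                     function(g) setNames(as.list(g$value), g$name))
    cat(as.yaml(nested))
    # a:
    #   x: 1
    #   y: 2
    # b:
    #   z: 3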
Bayesian networks with continuous and/or discrete variables can be learned and compared from data. The method is described in Boettcher and Dethlefsen (2003), <doi:10.18637/jss.v008.i20>.
Three global value chain (GVC) decompositions are implemented. The Leontief decomposition derives the value-added origin of exports by country and industry as in Hummels, Ishii and Yi (2001). The Koopman, Wang and Wei (2014) decomposition splits country-level exports into 9 value-added components, and the Wang, Wei and Zhu (2013) decomposition splits bilateral exports into 16 value-added components. Various GVC indicators based on these decompositions are computed in the complementary gvc package. References: Hummels, D., Ishii, J., & Yi, K. M. (2001). The nature and growth of vertical specialization in world trade. Journal of International Economics, 54(1), 75-96. Koopman, R., Wang, Z., & Wei, S. J. (2014). Tracing value-added and double counting in gross exports. American Economic Review, 104(2), 459-494. Wang, Z., Wei, S. J., & Zhu, K. (2013). Quantifying international production sharing at the bilateral and sector levels (No. w19677). National Bureau of Economic Research.
This package provides a wrapper for the ZEIT ONLINE Content API, available at <http://developer.zeit.de>. diezeit gives access to articles and corresponding metadata from the ZEIT archive and from ZEIT ONLINE. A personal API key is required for usage.
Loads behavioural data from the widely used Drosophila Activity Monitor System (DAMS, TriKinetics <https://trikinetics.com/>) into the rethomics framework.
An interactive image-editing tool that can be added as part of the HTML in Shiny, R Markdown, or any type of HTML document. Plots and photos are often embedded in such web applications and files. drawer can take screenshots of these image-like elements, or of any part of the HTML document, and send them to an image-editing space called canvas, letting users edit the screenshot(s) immediately within the same document. Users can quickly combine and compare different screenshots, upload their own images, and even assemble a scientific figure.
Generates summary statistics on the input dataset using various descriptive univariate statistical measures, on the entire data or at a group level. Other packages do a similar job, but each is deficient in one way or another: in the measures generated, in treating numeric, character, and date variables alike, in the lack of group-level views, or in how the output is represented. Given the foremost role of descriptive statistics in exploratory data analysis and solution development, there is a need for a more constructive, structured, and refined alternative. That is the idea behind this package: it brings together all the required descriptive measures to give an initial understanding of data quality and distribution in a faster, easier, and more thorough way, on the entire dataset or at a group level. It calculates measures of central tendency (mean, median), distribution (count, proportion), dispersion (min, max, quantiles, standard deviation, variance), and shape (skewness, kurtosis). In addition to these measures, it reports the data type, row count, number of unique entries, and percentage of missing entries. More importantly, the measures are generated according to each variable's data type, rather than applying numerical measures to character and date variables and vice versa. The output is a dataframe, which gives a very neat representation that is often useful when working with a large number of columns, and which can easily be exported as CSV for further analysis or presented as a summary report for the data.
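As a generic base-R illustration of the group-level idea (not this package's interface):

    # Several descriptive measures of mpg, computed per cylinder group.
    aggregate(mpg ~ cyl, data = mtcars,
              FUN = function(v) c(mean   = mean(v),
                                  median = median(v),
                                  sd     = sd(v),
                                  n      = length(v)))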
Compares the fit of alternative models of continuous trait differentiation between sister species and other paired lineages. Differences in trait means between two lineages arise as they diverge from a common ancestor, and alternative processes of evolutionary divergence are expected to leave unique signatures in the distribution of trait differentiation in datasets comprised of many lineage pairs. Models include approximations of divergent selection, drift, and stabilizing selection. A variety of model extensions facilitate the testing of process-to-pattern hypotheses. Users supply trait data and divergence times for each lineage pair. The fit of alternative models is compared in a likelihood framework.
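For instance, under the simplest drift model of this kind, the distribution of trait differences is the textbook Brownian-motion result (the general form, not necessarily this package's exact parameterization): for lineage pair i with divergence time t_i and drift rate \sigma^2,

    D_i = \bar{z}_{i1} - \bar{z}_{i2} \sim \mathcal{N}\left(0,\; 2\sigma^2 t_i\right).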
Mechanisms to parallelize dependent tasks in a manner that optimizes the compute resources available. It provides access to "delayed" computations, which may be parallelized using futures. It is, to an extent, a facsimile of the Dask library (<https://www.dask.org/>), for the Python language.
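As a generic illustration of future-based parallelism in R (this uses the future package directly, not this package's own interface):

    library(future)
    plan(multisession)            # run futures in background R sessions

    # Two independent "delayed" computations, evaluated in parallel.
    a <- future(sum(rnorm(1e6)))
    b <- future(sum(runif(1e6)))

    # A dependent step: blocks until both inputs have resolved.
    value(a) + value(b)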
This package performs Bayesian posterior inference for deep Gaussian processes following Sauer, Gramacy, and Higdon (2023, <doi:10.48550/arXiv.2012.08015>). See Sauer (2023, <http://hdl.handle.net/10919/114845>) for comprehensive methodological details and <https://bitbucket.org/gramacylab/deepgp-ex/> for a variety of coding examples. Models are trained through MCMC including elliptical slice sampling of latent Gaussian layers and Metropolis-Hastings sampling of kernel hyperparameters. Gradient-enhancement and gradient predictions are offered following Booth (2025, <doi:10.48550/arXiv.2512.18066>). Vecchia approximation for faster computation is implemented following Sauer, Cooper, and Gramacy (2023, <doi:10.48550/arXiv.2204.02904>). Optional monotonic warpings are implemented following Barnett et al. (2025, <doi:10.48550/arXiv.2408.01540>). Downstream tasks include sequential design through active learning Cohn/integrated mean squared error (ALC/IMSE; Sauer, Gramacy, and Higdon, 2023), optimization through expected improvement (EI; Gramacy, Sauer, and Wycoff, 2022, <doi:10.48550/arXiv.2112.07457>), and contour location through entropy (Booth, Renganathan, and Gramacy, 2025, <doi:10.48550/arXiv.2308.04420>). Models extend up to three layers deep; a one layer model is equivalent to typical Gaussian process regression. Incorporates OpenMP and SNOW parallelization and utilizes C/C++ under the hood.
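A minimal sketch of a typical workflow (assuming the fit_two_layer()/trim()/predict() interface described in the package documentation; argument names and defaults may differ, so consult the help pages):

    library(deepgp)

    # Toy one-dimensional data.
    x <- seq(0, 1, length = 30)
    y <- sin(4 * pi * x) + rnorm(30, sd = 0.05)

    # Fit a two-layer deep GP via MCMC, then drop burn-in and thin.
    fit <- fit_two_layer(x, y, nmcmc = 2000)
    fit <- trim(fit, burn = 1000, thin = 2)

    # Posterior predictions at new locations.
    xp  <- seq(0, 1, length = 100)
    fit <- predict(fit, xp)
    # fit$mean and fit$s2 then hold the posterior mean and variance.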