Enter your query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in the response headers.
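For example, a minimal sketch of calling the endpoint from R with the httr package, assuming the response body is JSON; the base URL below is a placeholder, not the site's actual address:

library(httr)

## Query the package search API; search, page and limit are the
## parameters described above.
resp <- GET("https://example.org/api/packages",
            query = list(search = "hello", page = 1, limit = 20))

## Pagination information (number of pages, etc.) comes back in the headers.
headers(resp)

## The matching packages themselves are in the response body.
packages <- content(resp)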
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
This package performs sparse discriminant analysis on a combination of node and leaf predictors when the predictor variables are structured according to a tree, as described in Fukuyama et al. (2017) <doi:10.1371/journal.pcbi.1005706>.
This package provides functions to build interactive dashboards combining the Tabler UI Kit with Shiny, making it easy to create professional-looking web applications. Tabler is fully responsive and compatible with all modern browsers. Offers customizable layouts and components built with HTML5 and CSS3. The underlying Tabler (<https://github.com/tabler/tabler>) and Tabler Icons (<https://github.com/tabler/tabler-icons>) were pre-built from source to eliminate the need for Node.js and NPM on package installation.
Datasets from Yotov et al. (2016, ISBN:978-92-870-4367-2), "An Advanced Guide to Trade Policy Analysis", and functions to report regression summaries with clustered robust standard errors.
This package provides a hypothesis test and a variable selection algorithm for use in time-varying concurrent regression models. The hypothesis test helps the user identify significant covariates within the scope of a time-varying concurrent model and is accompanied by a plotting function that shows the estimated beta(s) and confidence band(s). The plots show the amount of area that falls outside the confidence band(s), which is used as the test statistic for the hypothesis test.
In Cox's proportional hazards model, covariates are modeled as linear functions, which may not be flexible enough. This package implements the additive trend filtering Cox proportional hazards model as proposed in Jiacheng Wu & Daniela Witten (2019) "Flexible and Interpretable Models for Survival Data", Journal of Computational and Graphical Statistics, <DOI:10.1080/10618600.2019.1592758>. The fitted functions are piecewise polynomial with adaptively chosen knots.
This package implements nonlinear autoregressive (AR) time series models. For univariate series, a non-parametric approach is available through additive nonlinear AR. Parametric modeling and testing for regime-switching dynamics are available when the transition is either direct (TAR: threshold AR) or smooth (STAR: smooth transition AR, LSTAR). For multivariate series, one can estimate a range of TVAR or threshold cointegration TVECM models with two or three regimes. Tests can be conducted for TVAR as well as for TVECM (Hansen and Seo 2002 and Seo 2006).
Prediction intervals for ARIMA and structural time series models using an importance sampling approach with uninformative priors for the model parameters, leading to more accurate coverage probabilities in the frequentist sense. Instead of sampling the future observations and hidden states of the state space representation of the model, only the model parameters are sampled, and the method is based on solving the equations corresponding to the conditional coverage probability of the prediction intervals. This makes the method relatively fast compared to, for example, MCMC methods, and standard errors of the prediction limits can also be computed straightforwardly.
Efficient tabulation with Stata-like output. For each unique value of the variable, it shows the number of observations with that value, the proportion of observations with that value, and the cumulative proportion, in descending order of frequency. Accepts a data.table, tibble, or data.frame as input. Efficient with big data: if you give it a data.table, tab() uses data.table syntax.
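A minimal sketch of the kind of call described above; the exact argument conventions of tab() are assumed here rather than taken from the package documentation:

library(data.table)
## (the package providing tab() is assumed to be attached)

dt <- data.table(species = c("oak", "oak", "pine", "birch", "pine", "oak"))

## One row per unique value, with the count, proportion, and cumulative
## proportion, in descending order of frequency.
tab(dt, species)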
The goal of tidyplots is to streamline the creation of publication-ready plots for scientific papers. It allows you to gradually add, remove, and adjust plot components using a consistent and intuitive syntax.
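As an illustration of that syntax, a sketch along the lines of the package's documented examples; the dataset (study) and the function names (tidyplot(), add_data_points(), add_mean_bar(), add_sem_errorbar()) are recalled from the package documentation and should be checked against the current API:

library(tidyplots)

## Build a plot by gradually adding components with the pipe.
study |>
  tidyplot(x = treatment, y = score, color = treatment) |>
  add_data_points() |>
  add_mean_bar(alpha = 0.4) |>
  add_sem_errorbar()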
Set of functions designed to help in the analysis of TDP sensors. Features include date and time conversion, weather data interpolation, daily maximum tension analysis, and the calculations required to convert sap flow density data to sap flow rates at the tree and plot scale (for more information, see Granier (1985) <DOI:10.1051/forest:19850204> and Granier (1987) <DOI:10.1093/treephys/3.4.309>).
The main function of the package aims to update lmer()/glmer() models depending on their warnings, trying to avoid convergence and singularity problems.
Hospitals, hospital systems, and even trauma systems that provide care to injured patients may not be aware of robust metrics that can help gauge the efficacy of their programs in saving the lives of injured patients. traumar provides robust functions, driven by the academic literature, to automate the calculation of relevant metrics for anyone seeking to measure the performance of their trauma center or even a trauma system. traumar also provides some helper functions for the data analysis journey. Users can refer to the following publications for descriptions of the methods used in traumar. TRISS methodology, including probability of survival, and the W, M, and Z Scores - Flora (1978) <doi:10.1097/00005373-197810000-00003>, Boyd et al. (1987, PMID:3106646), Llullaku et al. (2009) <doi:10.1186/1749-7922-4-2>, Singh et al. (2011) <doi:10.4103/0974-2700.86626>, Baker et al. (1974, PMID:4814394), and Champion et al. (1989) <doi:10.1097/00005373-198905000-00017>. For the Relative Mortality Metric, see Napoli et al. (2017) <doi:10.1080/24725579.2017.1325948>, Schroeder et al. (2019) <doi:10.1080/10903127.2018.1489021>, and Kassar et al. (2016) <doi:10.1177/00031348221093563>. For methods to calculate over- and under-triage in trauma hospital populations and samples, see Peng & Xiang (2016) <doi:10.1016/j.ajem.2016.08.061>, Beam et al. (2022) <doi:10.23937/2474-3674/1510136>, and Roden-Foreman et al. (2017) <doi:10.1097/JTN.0000000000000283>.
Characterisation of the extremal dependence structure of time series, avoiding the pre-processing and filtering typically done with peaks-over-threshold methods. It uses the conditional approach of Heffernan and Tawn (2004) <DOI:10.1111/j.1467-9868.2004.02050.x>, which is very flexible in terms of extremal and asymptotic dependence structures, and Bayesian methods improve efficiency and allow for deriving measures of uncertainty. For example, the extremal index, related to the size of clusters in time, can be estimated and samples from its posterior distribution obtained.
General framework to organize data, methods, and results used in reproducible scientific analyses. A TAF analysis consists of four scripts (data.R, model.R, output.R, report.R) that are run sequentially. Each script starts by reading files from a previous step and ends with writing out files for the next step. Convenience functions are provided to version control the required data and software, run analyses, clean residues from previous runs, manage files, manipulate tables, and produce figures. With a focus on stability and reproducible analyses, the TAF package comes with no dependencies. TAF forms a base layer for the icesTAF package and other scientific applications.
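To make the four-script pattern concrete, a minimal sketch using only base R; the directory and file names here are illustrative assumptions, and the TAF convenience functions themselves are not shown:

## data.R: read raw input, end by writing out files for the next step
catch <- read.csv("initial/catch.csv")                 # hypothetical raw input
write.csv(catch, "data/catch.csv", row.names = FALSE)

## model.R: read the prepared data, fit a model, save the results
catch <- read.csv("data/catch.csv")
fit <- lm(tonnes ~ year, data = catch)                 # hypothetical model
saveRDS(fit, "model/fit.rds")

## output.R: read the model results, write out result tables
fit <- readRDS("model/fit.rds")
write.csv(coef(summary(fit)), "output/coefficients.csv")

## report.R: read the result tables, produce figures for the report
coefs <- read.csv("output/coefficients.csv")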
This package provides functions for point and interval estimation in error-in-variables models via the total least squares or generalized total least squares method. See Golub and Van Loan (1980) <doi:10.1137/0717073>, Gleser (1981) <https://www.jstor.org/stable/2240867>, and Markovsky and Van Huffel (2007) <doi:10.1016/j.sigpro.2007.04.004> for more information.
This package produces weighted cross-tabulation tables for one or more outcome variables across one or more breakdown variables, and exports them directly to Excel. For each outcome-by-breakdown combination, the package creates a weighted percentage table and a corresponding unweighted count table, with transparent handling of missing values and light, readable formatting. Designed to support social survey analysis workflows that require large sets of consistent, publication-ready tables.
Unit testing is a solid component of automated CI/CD pipelines. tinytest was developed as a lightweight, zero-dependency alternative to testthat. To integrate tinytest results into common CI/CD systems, the test results need to be captured and converted to the JUnit XML format. tinytest2JUnit enables this conversion while itself staying lightweight, with tinytest as its only dependency.
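A minimal sketch of that conversion; run_test_dir() is tinytest's standard runner, while the writeJUnit() call and its arguments are assumed from the package description and should be checked against its documentation:

library(tinytest)
library(tinytest2JUnit)

## Capture the tinytest results from a test directory...
results <- run_test_dir("inst/tinytest")

## ...and convert them to JUnit XML for the CI/CD system to pick up.
writeJUnit(results, file = "junit-results.xml")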
This package provides functions for the selection of thresholds for use in extreme value models, based mainly on the methodology in Northrop, Attalides and Jonathan (2017) <doi:10.1111/rssc.12159>. It also performs predictive inferences about future extreme values, based either on a single threshold or on a weighted average of inferences from multiple thresholds, using the revdbayes package <https://cran.r-project.org/package=revdbayes>. At the moment only the case where the data can be treated as independent identically distributed observations is considered.
Extensions to lattice, providing new high-level functions, methods for existing functions, panel functions, and a theme.
Efficient sampling of truncated multivariate (scale) mixtures of normals under linear inequality constraints is nontrivial due to the analytically intractable normalizing constant, and traditional methods may suffer from numerical issues, especially when the dimension is high and dependence is strong. The algorithms proposed by Li and Ghosh (2015) <doi:10.1080/15598608.2014.996690> are adopted to overcome these difficulties in simulating truncated distributions. The package includes efficient rejection sampling for the truncated univariate normal distribution, which shows superior acceptance rates and numerical stability compared to existing methods and R packages. It also provides an efficient function for sampling from the truncated multivariate normal distribution subject to convex polytope restriction regions, based on a Gibbs sampler over the conditional truncated univariate distributions, and, by extending the sampling method, a function for sampling the truncated multivariate Student's t distribution. The method and computation remain valid for high-dimensional and strong-dependence scenarios; empirical results in Li and Ghosh (2015) illustrate its superior performance across various criteria (e.g. mixing and integrated autocorrelation time).
This package provides functions for estimation of wood volumes, number of logs, diameters along the stem and heights at which certain diameters occur, based on taper functions and other parameters. References: McTague and Weiskittel (2021) <doi:10.1139/cjfr-2020-0326>.
This package provides access to the complete Pali Canon, or Tipitaka, the canonical scripture for Theravadin Buddhists worldwide. Based on the Chattha Sangayana Tipitaka version 4 (Vipassana Research Institute, 1990). Includes word frequency data and tools for Pali string sorting. For a lemmatized critical edition with sutta-level granularity, see the companion package tipitaka.critical.
Translate R control flow expressions into TensorFlow graphs.
TEMPoral TEnsor Decomposition (TEMPTED) is a dimension reduction method for multivariate longitudinal data with varying temporal sampling. It formats the data into a temporal tensor and decomposes it into a summation of low-dimensional components, each consisting of a subject loading vector, a feature loading vector, and a continuous temporal loading function. These loadings provide a low-dimensional representation of subjects or samples and can be used to identify features associated with clusters of subjects or samples. TEMPTED provides the flexibility of allowing subjects to have different temporal sampling, so time points do not need to be binned and missing time points do not need to be imputed.