Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned
in response headers.
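For example, a minimal sketch using the httr package (the base URL here is a placeholder for wherever this service is hosted; the exact pagination header names are not specified above, so the call simply prints all headers):

    library(httr)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    headers(resp)   # pagination details (e.g. number of pages) are among these
    content(resp)   # the matching packages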
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Clustered or multilevel data structures are common in the assessment of differential item functioning (DIF), particularly in the context of large-scale assessment programs. This package allows users to implement extensions of the Mantel-Haenszel DIF detection procedures in the presence of multilevel data based on the work of Begg (1999) <doi:10.1111/j.0006-341X.1999.00302.x>, Begg & Paykin (2001) <doi:10.1080/00949650108812115>, and French & Finch (2013) <doi:10.1177/0013164412472341>.
Connect to the DocuSign REST API <https://www.docusign.com/p/RESTAPIGuide/RESTAPIGuide.htm>, which supports embedded signing and sending of documents.
Given count data from two conditions, it determines which transcripts are differentially expressed across the two conditions using Bayesian inference of the parameters of a bottom-up model for PCR amplification. This model is developed in Ndifon Wilfred, Hilah Gal, Eric Shifrut, Rina Aharoni, Nissan Yissachar, Nir Waysbort, Shlomit Reich Zeliger, Ruth Arnon, and Nir Friedman (2012), <http://www.pnas.org/content/109/39/15865.full>, and results in a distribution for the counts that is a superposition of the binomial and negative binomial distributions.
Identifies, filters, and exports sex-linked markers using SNP (single nucleotide polymorphism) data. To install the other packages, we recommend installing the dartRverse package, which supports the installation of all packages in the dartRverse. If you want to understand the rationale applied to identify sex-linked markers and/or want to cite dartR.sexlinked, you can find the information by typing citation('dartR.sexlinked') in the console.
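For example, in an R console (standard base-R calls, following the advice above):

    install.packages("dartRverse")   # recommended entry point for the dartRverse packages
    citation("dartR.sexlinked")      # prints the citation information mentioned above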
The FBED and mmpc variable selection algorithms have been implemented using the distance correlation. The references include: Tsamardinos I., Aliferis C. F. and Statnikov A. (2003). "Time and sample efficient discovery of Markov blankets and direct causal relations". In Proceedings of the Ninth ACM SIGKDD International Conference. <doi:10.1145/956750.956838>. Borboudakis G. and Tsamardinos I. (2019). "Forward-backward selection with early dropping". Journal of Machine Learning Research, 20(8): 1--39. <doi:10.48550/arXiv.1705.10770>. Huo X. and Szekely G.J. (2016). "Fast computing for distance covariance". Technometrics, 58(4): 435--447. <doi:10.1080/00401706.2015.1054435>.
This package provides an interactive application that facilitates the planning of dose-finding studies by applying the theory of optimal experimental design.
Useful functions for various DDI (Data Documentation Initiative) related inputs and outputs. Converts data files to and from DDI, SPSS, Stata, SAS, R and Excel, including user-declared missing values.
This package provides a collection of novel tools for generating species distribution and abundance models (SDM) that are dynamic through both space and time. These highly flexible functions incorporate spatial and temporal aspects across key SDM stages, including cleaning and filtering species occurrence data, generating pseudo-absence records, assessing and correcting sampling biases and autocorrelation, extracting explanatory variables, and projecting distribution patterns. Throughout, functions utilise Google Earth Engine and Google Drive to minimise the computing power and storage demands associated with species distribution modelling at high spatio-temporal resolution.
Fast distributed/parallel estimation for multinomial logistic regression via Poisson factorization and the gamlr package. For details see: Taddy (2015, AoAS), Distributed Multinomial Regression, <doi:10.48550/arXiv.1311.6139>.
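The factorization that makes the estimation distributable, roughly as in Taddy (2015), is that the multinomial logit likelihood for counts is proportional to a product of independent Poisson likelihoods, one per response category, so categories can be fit in parallel (e.g., each with gamlr). In LaTeX notation:

    c_{ij} \sim \mathrm{Po}\!\left( e^{\mu_i + \alpha_j + x_i' \varphi_j} \right),
    \qquad j = 1, \dots, K \ \text{fit independently},

where the nuisance terms \mu_i absorb the multinomial totals m_i.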
Edit and validate taxonomic data in compliance with Darwin Core standards (Darwin Core Taxon class <https://dwc.tdwg.org/terms/#taxon>).
This package implements common measures of diversity and spatial segregation, including tools to compute the majority of the measures reviewed in Massey and Denton (1988) <doi:10.2307/2579183>. Multiple common measures of within-geography diversity are implemented as well. All functions operate on data frames with a tidyselect-based workflow.
The natural increase in the complexity of current research experiments and data demands better tools to enhance productivity in data analytics. This package is a framework designed to address the modern challenges in data analytics workflows. Inspired by Experiment Line concepts, it aims to provide seamless support for users developing their data mining workflows by offering a uniform data model and method API. It enables the integration of various data mining activities, including data preprocessing, classification, regression, clustering, and time series prediction. It also offers options for hyper-parameter tuning and supports integration with existing libraries and languages. Overall, the package provides researchers with a comprehensive set of functionalities for data science, promoting ease of use, extensibility, and integration with various tools and libraries. Information on Experiment Line is based on Ogasawara et al. (2009) <doi:10.1007/978-3-642-02279-1_20>.
This package contains functions to help with generating tables with descriptive statistics. In addition, the package can display results of statistical tests for group comparisons. A wide range of test procedures is supported, and user-defined test functions can be incorporated.
Re-arranges a dendrogram to optimize visualisation-based cost functions.
This package includes an interactive Shiny application for plotting common univariate distributions.
Estimation of dark diversity and site-specific species pools using species co-occurrences. It includes implementations of probabilistic dark diversity based on the Hypergeometric distribution, as well as estimations based on the Beals index, which can be transformed to binary predictions using different thresholds, or transformed into a favorability index. All methods include the possibility of using a calibration dataset that is used to estimate the indication matrix between pairs of species, or to estimate dark diversity directly on a single dataset. See De Caceres and Legendre (2008) <doi:10.1007/s00442-008-1017-y>, Lewis et al. (2016) <doi:10.1111/2041-210X.12443>, Partel et al. (2011) <doi:10.1016/j.tree.2010.12.004>, Real et al. (2017) <doi:10.1093/sysbio/syw072> for further information.
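For reference, one common form of the Beals index (conventions vary slightly across the cited papers, e.g. in whether the target species is excluded from the sum) for species j at site i is, in LaTeX notation:

    b_{ij} = \frac{1}{S_i} \sum_{k} \frac{N_{jk}}{N_k} I_{ik},

where I_{ik} is the incidence (0/1) of species k at site i, N_k is the number of sites where k occurs, N_{jk} is the number of joint occurrences of j and k, and S_i is the richness of site i.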
With bivariate data, it is possible to calculate 2-dimensional kernel density estimates and extract their contour polygons at given levels of probability. densityarea returns these polygons for analysis, including calculating their area.
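The underlying idea can be sketched with base tools (a generic illustration, not this package's own functions):

    library(MASS)                       # for kde2d()
    set.seed(1)
    x <- rnorm(500); y <- rnorm(500)
    d <- kde2d(x, y, n = 100)
    ## approximate density level enclosing 50% of the probability mass
    z <- sort(d$z, decreasing = TRUE)
    lev <- z[which.max(cumsum(z) / sum(z) >= 0.5)]
    poly <- contourLines(d$x, d$y, d$z, levels = lev)[[1]]
    ## shoelace formula for the polygon's area
    abs(sum(poly$x * c(poly$y[-1], poly$y[1]) -
            c(poly$x[-1], poly$x[1]) * poly$y)) / 2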
Constructs confidence regions without the need to know the sampling distribution of bivariate data. The method was proposed by Zhiqiu Hu & Rong-cai Yang (2013) <doi:10.1371/journal.pone.0081179>.
Consider ambiguity in probabilistic descriptions by replacing a parametric probabilistic description of uncertainty by a non-parametric set of probability distributions in the form of a Density Ratio Class. This is of particular interest in Bayesian inference. The Density Ratio Class is particularly suited for this purpose as it is invariant under Bayesian inference, marginalization, and propagation through a deterministic model. Here, invariant means that the result of the operation applied to a Density Ratio Class is again a Density Ratio Class. In particular, the invariance under Bayesian inference thus enables iterative learning within the same framework of Density Ratio Classes. The use of imprecise probabilities in general, and Density Ratio Classes in particular, leads to intervals of characteristics of probability distributions, such as cumulative distribution functions, quantiles, and means. The package is based on a sample of the distribution proportional to the upper bound of the class. Typically, this will be a sample from the posterior in Bayesian inference. Based on such a sample, the package provides functions to calculate lower and upper class boundaries and lower and upper bounds of cumulative distribution functions and quantiles. Rinderknecht, S.L., Albert, C., Borsuk, M.E., Schuwirth, N., Kuensch, H.R. and Reichert, P. (2014) "The effect of ambiguous prior knowledge on Bayesian model parameter inference and prediction." Environmental Modelling & Software. 62, 300-315, 2014. <doi:10.1016/j.envsoft.2014.08.020>. Sriwastava, A. and Reichert, P. "Robust Bayesian Estimation of Value Function Parameters using Imprecise Priors." Submitted. <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4973574>.
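In the usual formulation, the Density Ratio Class defined by lower and upper bounding functions l and u is, in LaTeX notation:

    \Lambda(l, u) = \left\{ \pi \;:\; \pi(\theta) = \frac{g(\theta)}{\int g(\vartheta)\,\mathrm{d}\vartheta},
    \quad l(\theta) \le g(\theta) \le u(\theta) \right\},

i.e., all distributions whose unnormalized density lies between the two bounding functions.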
This package provides functions to manage databases: select, update, insert, and delete records, list tables, back up tables as CSV files, and import CSV files as tables.
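The same operations look roughly like this with the standard DBI interface (a generic sketch, not this package's own API):

    library(DBI)
    con <- dbConnect(RSQLite::SQLite(), ":memory:")
    dbWriteTable(con, "mtcars", mtcars)                    # insert records
    dbListTables(con)                                      # list tables
    dbGetQuery(con, "SELECT * FROM mtcars WHERE cyl = 4")  # select records
    dbExecute(con, "DELETE FROM mtcars WHERE cyl = 8")     # delete records
    write.csv(dbReadTable(con, "mtcars"),
              "mtcars_backup.csv", row.names = FALSE)      # back up a table as CSV
    dbDisconnect(con)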
Several tests for differential methylation in methylation array data, including one-sided differential mean and variance tests. Methods used in the package refer to Dai J., Wang X., Chen H., and others (2021), "Incorporating increased variability in discovering cancer methylation markers", Biostatistics, submitted.
An integrated toolset for the analysis of de novo (sporadic) genetic sequence variants. denovolyzeR implements a mutational model that estimates the probability of a de novo genetic variant arising in each human gene, from which one can infer the expected number of de novo variants in a given population size. Observed variant frequencies can then be compared against expectation in a Poisson framework. denovolyzeR provides a suite of functions to implement these analyses for the interpretation of de novo variation in human disease.
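The core comparison can be illustrated with stats::poisson.test (an illustration of the statistical idea, not denovolyzeR's own interface; the numbers are invented):

    observed <- 12    # de novo variants seen in a gene across the cohort
    expected <- 4.2   # expected count: per-gene mutation probability x population size
    poisson.test(observed, T = expected)   # tests for an excess over expectation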
Work within the dplyr workflow to add random variates to your data frame. Variates can be added at any level of an existing column. Also, bounds can be specified for simulated variates.
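As a generic dplyr illustration of the idea (not this package's own verbs):

    library(dplyr)
    mtcars %>%
      group_by(cyl) %>%
      mutate(group_noise = rnorm(1)) %>%   # one variate per level of an existing column
      ungroup() %>%
      mutate(row_noise = pmin(pmax(rnorm(n()), -2), 2))  # row-level variates, bounded to [-2, 2]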
Efficiently and flexibly preprocess data using a set of data filtering, deletion, and interpolation tools. These data preprocessing methods are developed based on the principles of completeness, accuracy, threshold method, and linear interpolation and through the setting of constraint conditions, time completion & recovery, and fast & efficient calculation and grouping. Key preprocessing steps include deletions of variables and observations, outlier removal, and missing values (NA) interpolation, depending on how incomplete and dispersed the raw data are. They clean data more accurately, keep more samples, and add no outliers after interpolation, compared with ordinary methods. Auto-identification of consecutive NA via run-length based grouping is used in observation deletion, outlier removal, and NA interpolation; thus, new outliers are not generated in interpolation. Conditional extremum is proposed to realize point-by-point weighted outlier removal that saves non-outliers from being removed. In addition, interpolating time series only within short periods that have nearby values to refer to further ensures reliable interpolation. These methods are based on and improved from the reference: Liang, C.-S., Wu, H., Li, H.-Y., Zhang, Q., Li, Z. & He, K.-B. (2020) <doi:10.1016/j.scitotenv.2020.140923>.
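The run-length grouping of consecutive NAs mentioned above can be sketched in base R (an illustration of the idea, not the package's internal code):

    x <- c(1, NA, NA, NA, 4, 5, NA, 7)
    r <- rle(is.na(x))
    data.frame(is_na = r$values, run_length = r$lengths)
    ## long NA runs (say, > 2) would be deleted rather than interpolated,
    ## while short gaps could be filled, e.g. with approx() for linear interpolation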