Enter the query into the form above. You can look for a specific version of a package by using the @ symbol, like this: gcc@10.
API method:
GET /api/packages?search=hello&page=1&limit=20
where search is your query, page is the page number, and limit is the number of items on a single page. Pagination information (such as the number of pages) is returned in the response headers.
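For example, a minimal sketch of calling this endpoint from R with the 'httr' package (the host below is a placeholder; substitute the actual site URL):

    # Query the package search API (placeholder host).
    library(httr)
    resp <- GET("https://example.org/api/packages",
                query = list(search = "hello", page = 1, limit = 20))
    packages <- content(resp, as = "parsed")  # parsed JSON body
    headers(resp)                             # pagination info lives here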
If you'd like to join our channel webring, send a patch to ~whereiseveryone/toys@lists.sr.ht adding your channel as an entry in channels.scm.
Splines are efficiently represented through their Taylor expansions at the knots. The representation accounts for the support sets and is thus suitable for sparse functional data. Two cases of boundary conditions are considered: zero boundary or periodic boundary for all derivatives except the last. Periodic splines are represented graphically using polar coordinates. B-splines and orthogonal bases of splines that reside on small total support are implemented. The orthogonal bases are referred to as splinets and are utilized for functional data analysis. A random spline generator is implemented, as are all fundamental algebraic and calculus operations on splines. The optimal functional fit, in the least-squares sense, by splinets to data consisting of sampled values of functions, as well as to splines built over another set of knots, is obtained and used for functional data analysis. The S4 object-oriented system of R is used. <doi:10.48550/arXiv.2102.00733>, <doi:10.1016/j.cam.2022.114444>, <doi:10.48550/arXiv.2302.07552>.
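For orientation, B-spline bases like the ones this package builds on can be evaluated with base R's 'splines' package; a minimal sketch (shown only to illustrate the objects involved, not this package's own API):

    # Evaluate a cubic B-spline basis on equidistant knots.
    library(splines)
    knots <- seq(0, 1, by = 0.1)
    x <- seq(0, 1, length.out = 101)
    B <- splineDesign(knots, x, ord = 4, outer.ok = TRUE)
    matplot(x, B, type = "l", ylab = "basis value")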
Identifies a bicluster, a submatrix of the data such that the features and observations within the submatrix differ from those not contained in the submatrix, using a two-step method. In the first step, observations in the bicluster are identified so as to maximize the sum of weighted between-cluster feature differences. The method is described in Helgeson et al. (2020) <doi:10.1111/biom.13136>. SCBiclust can be used to identify biclusters which differ based on feature means, feature variances, or more general differences.
Calculates the slope (longitudinal gradient or steepness) of linear geographic features such as roads (for more details, see Ariza-López et al. (2019) <doi:10.1038/s41597-019-0147-x>) and rivers (for more details, see Cohen et al. (2018) <doi:10.1016/j.jhydrol.2018.06.066>). It can use local Digital Elevation Model (DEM) data or download DEM data via the ceramic package. The package also provides functions to add elevation data to linestrings and visualize elevation profiles.
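As an illustration of the underlying idea (not this package's API), the gradient of a segment is simply rise over run:

    # Conceptual slope computation from hypothetical planar coordinates
    # and elevations, all in metres.
    pts <- data.frame(x = c(0, 100, 200),
                      y = c(0, 0, 0),
                      z = c(10, 12, 11))
    run  <- sqrt(diff(pts$x)^2 + diff(pts$y)^2)  # horizontal distance
    rise <- diff(pts$z)                          # elevation change
    rise / run                                   # 0.02 (2% up), then -0.01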
Makes the React library Chakra UI usable in Shiny apps. Chakra UI components include alert dialogs, drawers (sliding panels), menus, modals, popovers, sliders, and more.
Fits dimension reduction methods to data lying on the two-dimensional sphere. This package provides principal geodesic analysis, the principal circle, principal curves proposed by Hauberg, and spherical principal curves. Moreover, it offers a method of locally defined principal geodesics, which is under development. The detailed procedures are described in Lee, J., Kim, J.-H. and Oh, H.-S. (2021) <doi:10.1109/TPAMI.2020.3025327>. Also see Kim, J.-H., Lee, J. and Oh, H.-S. (2020) <arXiv:2003.02578>.
Compiles and displays the available data sets regarding the Italian school system, with a focus on the infrastructural aspects. Input datasets are downloaded from the web, with the aim of keeping everything up to date. The functions are divided into four main modules, namely 'Get', to scrape raw data from the web; 'Util', various utilities needed to process raw data; 'Group', to aggregate data at the municipality or province level; and 'Map', to visualize the output datasets.
Quickly and flexibly calculates weights for survey data, in order to correct for survey non-response or other sampling issues. Uses rake weighting, a common technique also known as rim weighting or iterative proportional fitting. This technique allows for weighting on multiple variables, even when the interlocked distribution of the two variables is not known. Interacts with Thomas Lumley's 'survey' package, as described in Lumley, Thomas (2011, ISBN:978-1-118-21093-2). Adds additional functionality, more adaptable syntax, and error-checking to the base weighting functionality in 'survey'.
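The core idea of raking can be shown in a few lines of base R (a conceptual sketch with a hypothetical six-respondent survey, not this package's API):

    # Iterative proportional fitting: rescale weights to match each
    # known population margin in turn until convergence.
    sex <- c("m", "m", "f", "f", "f", "m")
    age <- c("young", "old", "young", "old", "young", "old")
    target_sex <- c(m = 0.5, f = 0.5)
    target_age <- c(young = 0.4, old = 0.6)
    w <- rep(1, 6)
    for (i in 1:50) {
      w <- w * (target_sex[sex] * sum(w)) / ave(w, sex, FUN = sum)
      w <- w * (target_age[age] * sum(w)) / ave(w, age, FUN = sum)
    }
    round(w / sum(w), 3)  # weights now reproduce both margins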
Metapackage for implementing a variety of event-based models, with a focus on spatially explicit models. These include raster-based, event-based, and agent-based models. The core simulation components (provided by 'SpaDES.core') are built upon a discrete event simulation (DES; see Matloff (2011) ch 7.8.3 <https://nostarch.com/artofr.htm>) framework that facilitates modularity, and easily enables the user to include additional functionality by running user-built simulation modules (see also 'SpaDES.tools'). Included are numerous tools to visualize rasters and other maps (via 'quickPlot'), and caching methods for reproducible simulations (via 'reproducible'). Tools for running simulation experiments are provided by 'SpaDES.experiment'. Additional functionality is provided by the 'SpaDES.addins' and 'SpaDES.shiny' packages.
Extends the classical SSIM method proposed by Wang, Bovik, Sheikh, and Simoncelli (2004) <doi:10.1109/TIP.2003.819861> to irregular lattice-based maps and raster images. The geographical SSIM method incorporates well-developed geographically weighted summary statistics (Brunsdon, Fotheringham, and Charlton 2002) <doi:10.1016/S0198-9715(01)00009-6> with an adaptive bandwidth kernel function for irregular lattice-based maps.
Compute the frequency distribution of a search term in a series of texts. For example, Arthur Conan Doyle wrote a total of 60 Sherlock Holmes stories, comprising 54 short stories and 4 longer novels. I wanted to test my own subjective impression that, in many of the stories, Sherlock Holmes's popularity was used as bait to induce the reader to read a story that is essentially not primarily a Sherlock Holmes story. I used the term "Holmes" as a search pattern, since Watson would frequently address him by name, or use his name to describe something that he was doing. My hypothesis is that the frequency distribution of the search pattern "Holmes" is a good proxy for the degree to which a story is or is not truly a Sherlock Holmes story. The results are presented in a manuscript that is available as a vignette and online at <https://barryzee.github.io/Concordance/index.html>.
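The counting itself is straightforward; a base-R sketch with a hypothetical two-text corpus (not this package's API):

    # Count occurrences of a pattern in each text and normalize by length.
    stories <- list(scandal = c("Holmes rose", "said Holmes", "the street"),
                    lion    = c("I was alone", "no Holmes here"))
    counts <- vapply(stories, function(lines) {
      sum(lengths(regmatches(lines, gregexpr("Holmes", lines))))
    }, integer(1))
    counts / lengths(stories)  # hits per line of text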
Landsat satellites collect important data about global forest conditions. Documentation about Landsat's role in forest disturbance estimation is available at <https://landsat.gsfc.nasa.gov/>. Using constrained quadratic B-splines, this package delivers an optimal shape-restricted trajectory to a time series of Landsat imagery for the purpose of modeling annual forest disturbance dynamics in an ecologically sensible manner, assuming one of seven possible "shapes": flat; decreasing; one-jump (decreasing, jump up, decreasing); inverted vee (increasing then decreasing); vee (decreasing then increasing); linear increasing; and double-jump (decreasing, jump up, decreasing, jump up, decreasing). The main routine selects the best shape according to the minimum Bayesian information criterion (BIC) or the cone information criterion (CIC), which is defined as the log of the estimated predictive squared error. The package also provides parameters summarizing the temporal pattern, including year(s) of inflection, magnitude of change, and pre- and post-inflection rates of growth or recovery. In addition, it contains routines for converting a flat map of disturbance agents to time-series disturbance maps and a graphical routine displaying the fitted trajectory of Landsat imagery.
An advanced version of the package 's2dverification'. Intended for seasonal-to-decadal (s2d) climate forecast verification, but also applicable to other types of forecasts or general climate analysis. This package is specifically designed for comparing experimental and observational datasets. It provides functionality for data retrieval, post-processing, skill-score computation against observations, and visualization. Compared to 's2dverification', 's2dv' is more compatible with the package 'startR', can use multiple cores for computation, and handles multi-dimensional arrays with greater flexibility. The Climate Data Operators (CDO) version used in development is 1.9.8. Implements methods described in Wilks (2011) <doi:10.1016/B978-0-12-385022-5.00008-7>, DelSole and Tippett (2016) <doi:10.1175/MWR-D-15-0218.1>, Kharin et al. (2012) <doi:10.1029/2012GL052647>, and Doblas-Reyes et al. (2003) <doi:10.1007/s00382-003-0350-4>.
Fast, lightweight toolkit for data splitting. Data sets can be partitioned into disjoint groups (e.g. into training, validation, and test) or into (repeated) k-folds for subsequent cross-validation. Besides basic splits, the package supports stratified, grouped, and blocked splitting. Furthermore, cross-validation folds for time series data can be created. See, e.g., Hastie et al. (2001) <doi:10.1007/978-0-387-84858-7> for the basic background on data partitioning and cross-validation.
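For orientation, stratified k-fold assignment can be sketched in base R (conceptual only; not this package's API):

    # Deal observations of each class round-robin into k folds so that
    # every fold keeps the class balance.
    set.seed(1)
    y <- rep(c("a", "b"), times = c(60, 40))  # stratification variable
    k <- 5
    fold <- ave(seq_along(y), y,
                FUN = function(i) sample(rep_len(1:k, length(i))))
    table(y, fold)               # ~12 "a" and ~8 "b" per fold
    train_1 <- which(fold != 1)  # training indices for fold 1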
Data sets utilized by the SGP package as exemplars for users to conduct their own student growth percentiles (SGP) analyses.
This package provides a system that enables cross-study analysis by extracting and filtering study data for control animals from a CDISC SEND study repository. The following data types are supported: body weights, laboratory test results, and microscopic findings. The supported database types are 'SQLite' and 'Oracle'.
Fitting a smooth path to a given set of noisy spherical data observed at known time points. It implements a piecewise geodesic curve fitting method on the unit sphere based on a velocity-based penalization scheme. The proposed approach is implemented using the Riemannian block coordinate descent algorithm. To understand the method and algorithm, one can refer to Bak, K. Y., Shin, J. K., & Koo, J. Y. (2023) <doi:10.1080/02664763.2022.2054962> for the case of order 1. Additionally, this package includes various functions necessary for handling spherical data.
Exporting shiny applications with 'shinylive' allows you to run them entirely in a web browser, without the need for a separate R server. The traditional way of deploying shiny applications involves a separate server and client: the server runs R and 'shiny', and clients connect via the web browser. When an application is deployed with 'shinylive', R and 'shiny' run in the web browser (via 'webR'): the browser is effectively both the client and the server for the application. This allows a shiny application exported by 'shinylive' to be hosted by a static web server.
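For instance, exporting an app directory to a static site is a one-liner (directory names below are hypothetical):

    # Export the shiny app in "myapp/" to a static site in "site/".
    shinylive::export(appdir = "myapp", destdir = "site")
    # Any static web server can then host "site/"; for local testing
    # (requires a recent 'httpuv'):
    httpuv::runStaticServer("site")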
Encrypt text using a simple shifting substitution cipher with setcode(), providing two numeric keys used to define the encryption algorithm. The resulting text can be decoded using the decode() function and the two numeric keys specified during encryption.
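The idea can be sketched in base R (illustrative only; not necessarily this package's exact algorithm):

    # A two-key shifting cipher: alternate the two shifts across positions.
    shift_encode <- function(text, key1, key2) {
      codes <- utf8ToInt(text)
      intToUtf8(codes + rep_len(c(key1, key2), length(codes)))
    }
    shift_decode <- function(text, key1, key2) {
      codes <- utf8ToInt(text)
      intToUtf8(codes - rep_len(c(key1, key2), length(codes)))
    }
    msg <- shift_encode("meet at noon", 3, 5)
    shift_decode(msg, 3, 5)  # "meet at noon"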
This package provides a statistical learning method to simultaneously predict a range of target phenotypes using codified and natural language processing (NLP)-derived Electronic Health Record (EHR) data. See Ahuja et al. (2020) JAMIA <doi:10.1093/jamia/ocaa079> for details.
This package provides a collection of sparse and regularized discriminant analysis methods intended for small-sample, high-dimensional data sets. The package features the High-Dimensional Regularized Discriminant Analysis classifier from Ramey et al. (2017) <arXiv:1602.01182>. Other classifiers include those from Dudoit et al. (2002) <doi:10.1198/016214502753479248>, Pang et al. (2009) <doi:10.1111/j.1541-0420.2009.01200.x>, and Tong et al. (2012) <doi:10.1093/bioinformatics/btr690>.
This package provides functions for the analysis of network objects, which are imported or simulated by the package. The non-parametric methods of analysis center on snowball and bootstrap sampling for estimating functions of the network degree distribution. For other parameters of interest, see, e.g., the 'bootnet' package.
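The bootstrap part of the approach can be illustrated in base R (a conceptual sketch, not this package's API):

    # Bootstrap a degree-distribution statistic: resample observed node
    # degrees with replacement and summarize the statistic's variability.
    set.seed(42)
    degrees <- c(1, 1, 2, 2, 2, 3, 4, 4, 6, 9)
    boot_means <- replicate(1000, mean(sample(degrees, replace = TRUE)))
    quantile(boot_means, c(0.025, 0.975))  # interval for the mean degree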
This package provides a shiny interface for simpler use of the 'sbm' R package. It also contains useful functions to easily explore the 'sbm' package results. With this package you should be able to use the stochastic block model without any knowledge of R, get automatic reports and nice visuals, and learn the basic functions of 'sbm'.
Annotates single-cell and spatial-transcriptomic (ST) data using context-matching marker datasets. It creates a unified marker list (`Markers_list`) from multiple sources: built-in curated databases ('Cellmarker2', 'PanglaoDB', 'scIBD', 'TCellSI', 'PCTIT', 'PCTAM'), Seurat objects with cell labels, or user-provided Excel tables. SlimR first uses adaptive machine learning for parameter optimization, and then offers two automated annotation approaches: cluster-based and per-cell. Cluster-based annotation assigns one label per cluster, using expression-based probability calculation and AUC validation. Per-cell annotation assigns labels to individual cells using three scoring methods with adaptive thresholds and ratio-based confidence filtering, plus optional UMAP spatial smoothing, making it ideal for heterogeneous clusters and rare cell types. The package also supports semi-automated workflows with heatmaps, feature plots, and combined visualizations for manual annotation. For more details, see Kabacoff (2020, ISBN:9787115420572).
Import data from the STATcube REST API or from the open data portal of Statistics Austria. This package includes a client for API requests as well as parsing utilities for data which originates from 'STATcube'. Documentation about 'STATcubeR' is provided by several vignettes included in the package as well as on the public pkgdown page at <https://statistikat.github.io/STATcubeR/>.
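A minimal sketch of pulling a dataset from the open data portal, assuming the od_table() entry point (the dataset id below is a placeholder):

    # Fetch an open-data table by id and tabulate it (id is a placeholder).
    library(STATcubeR)
    tab <- od_table("OGD_krebs_ext_KREBS_1")
    head(tab$tabulate())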