From 619f6c91f9c9b126546a3fb0ee3ebe498ec9034d Mon Sep 17 00:00:00 2001
From: Ahmad Alsaleh
Date: Mon, 17 Jun 2024 09:49:29 +0400
Subject: [PATCH] docs: update `README.Rmd`

---
 README.Rmd | 36 ++++++++++++++----------------------
 README.md  | 29 +++++++++++++++++++++--------
 2 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/README.Rmd b/README.Rmd
index 13a6eac..d988a38 100644
--- a/README.Rmd
+++ b/README.Rmd
@@ -22,37 +22,29 @@ knitr::opts_chunk$set(
 [![Codecov test coverage](https://codecov.io/gh/Ahmad-Alsaleh/EvaluateFeatureSelection/branch/main/graph/badge.svg)](https://app.codecov.io/gh/Ahmad-Alsaleh/EvaluateFeatureSelection?branch=main)
 
-The goal of EvaluateFeatureSelection is to ...
+Generates plots to visualize and assess the performance of feature selection methods using supervised learning.
+ It also provides functions to plot scree plots to visualize good cutting points for the number of features to be selected.
 
 ## Installation
 
-You can install the development version of EvaluateFeatureSelection like so:
+You can install the development version of EvaluateFeatureSelection like
+so:
 
-``` r
-# FILL THIS IN! HOW CAN PEOPLE INSTALL YOUR DEV PACKAGE?
+```r
+install.packages("remotes")
+remotes::install_github("Ahmad-Alsaleh/EvaluateFeatureSelection")
 ```
 
 ## Example
 
-This is a basic example which shows you how to solve a common problem:
-
-```{r example}
+Generate a scree plot:
+```r
 library(EvaluateFeatureSelection)
-## basic example code
-```
-
-What is special about using `README.Rmd` instead of just `README.md`? You can include R chunks like so:
-
-```{r cars}
-summary(cars)
+features_scores <- c(x1 = 0.8165005, x2 = -0.1178857, ...)
+get_scree_plot(features_scores)
 ```
+![BAM Scores Scree Plot](https://github.com/Ahmad-Alsaleh/EvaluateFeatureSelection/assets/61240880/46da58ea-c7d0-4247-8d8b-af6758d2ff18)
 
-You'll still need to render `README.Rmd` regularly, to keep `README.md` up-to-date. `devtools::build_readme()` is handy for this.
-
-You can also embed plots, for example:
-
-```{r pressure, echo = FALSE}
-plot(pressure)
-```
+Similarly, you can use `get_auc_plot(...)` or `get_acc_plot(...)` to evaluate the performance of feature selection methods using supervised learning and AUC/accuracy as the performance metric.
 
-In that case, don't forget to commit and push the resulting figure files, so they display on GitHub and CRAN.
+![image](https://github.com/Ahmad-Alsaleh/EvaluateFeatureSelection/assets/61240880/5684b533-ae91-491e-8584-9f356a909a20)

diff --git a/README.md b/README.md
index db0db46..c40b099 100644
--- a/README.md
+++ b/README.md
@@ -14,8 +14,10 @@ status](https://www.r-pkg.org/badges/version/EvaluateFeatureSelection)](https://
 coverage](https://codecov.io/gh/Ahmad-Alsaleh/EvaluateFeatureSelection/branch/main/graph/badge.svg)](https://app.codecov.io/gh/Ahmad-Alsaleh/EvaluateFeatureSelection?branch=main)
 
-This package generates plots to visualize and assess the performance of feature selection methods using supervised learning.
- It also provides functions to plot scree plots to visualize good cutting points for the number of features to be selected.
+Generates plots to visualize and assess the performance of feature
+selection methods using supervised learning. It also provides functions
+to plot scree plots to visualize good cutting points for the number of
+features to be selected.
 
 ## Installation
 
@@ -36,10 +38,21 @@ library(EvaluateFeatureSelection)
 features_scores <- c(x1 = 0.8165005, x2 = -0.1178857, ...)
 get_scree_plot(features_scores)
 ```
-![BAM Scores Scree Plot](https://github.com/Ahmad-Alsaleh/EvaluateFeatureSelection/assets/61240880/46da58ea-c7d0-4247-8d8b-af6758d2ff18)
-
-
-Similarly, you can use `get_auc_plot(...)` to evaluate the performance of feature selection methods using supervised learning.
-
-![image](https://github.com/Ahmad-Alsaleh/EvaluateFeatureSelection/assets/61240880/5684b533-ae91-491e-8584-9f356a909a20)
+![BAM Scores Scree Plot](https://github.com/Ahmad-Alsaleh/EvaluateFeatureSelection/assets/61240880/46da58ea-c7d0-4247-8d8b-af6758d2ff18)
+
+Similarly, you can use `get_auc_plot(...)` or `get_acc_plot(...)` to
+evaluate the performance of feature selection methods using supervised
+learning and AUC/accuracy as the performance metric.
+
+![image](https://github.com/Ahmad-Alsaleh/EvaluateFeatureSelection/assets/61240880/5684b533-ae91-491e-8584-9f356a909a20)
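As a quick local sanity check of the example section added above, the following R sketch reuses only the calls the README shows (`remotes::install_github()`, `get_scree_plot()`). The `get_auc_plot()`/`get_acc_plot()` lines are left commented out because the README elides their arguments with `...`, so the argument names used there (`data`, `target`, `my_data`) are assumptions rather than the package's documented API.

```r
# Minimal sketch, not part of the patch: reproduces the README example and
# hints at the AUC/accuracy plots. Only the get_scree_plot() usage is
# confirmed by the README; the commented argument names below are guesses.
install.packages("remotes")
remotes::install_github("Ahmad-Alsaleh/EvaluateFeatureSelection")

library(EvaluateFeatureSelection)

# Feature scores keyed by feature name (the README truncates this vector with `...`).
features_scores <- c(x1 = 0.8165005, x2 = -0.1178857)

# Scree plot to eyeball a cut-off for how many features to keep.
get_scree_plot(features_scores)

# Hypothetical call shapes for the supervised-learning evaluation plots --
# argument names are NOT from the package documentation:
# get_auc_plot(features_scores, data = my_data, target = "y")
# get_acc_plot(features_scores, data = my_data, target = "y")
```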