• The Dataset
• Getting Started
• Quantifying Bias
• Training The Model
• Evaluating The Model
• Mitigating Bias with AI Fairness 360
• Closing Thoughts

The Dataset

When choosing a dataset, I thought it was important to select one that clearly involves legally protected groups/classes and whose outcome variable is binary, with a clearly defined favorable outcome and unfavorable outcome. I selected a relatively small dataset from Kaggle, which contains information about loan applicants, as well as whether each applicant's loan was approved.

Getting Started

First, I imported the dataset with Pandas and checked for missing values. Unfortunately, the dataset does contain missing values, so I simply removed the rows with nulls:
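A minimal sketch of that cleaning step, using a couple of made-up rows in place of the Kaggle CSV (the real code would start from `pd.read_csv`):

```python
import numpy as np
import pandas as pd

# Stand-in for pd.read_csv(...) on the Kaggle loan file -- a few rows
# with missing values, just to illustrate the cleaning step.
df = pd.DataFrame({
    "Gender": ["Male", "Female", None, "Female"],
    "LoanAmount": [120.0, np.nan, 100.0, 95.0],
    "Loan_Status": [1, 0, 1, 0],
})

print(df.isnull().sum())  # missing-value count per column
df = df.dropna()          # drop any row containing a null
print(len(df))            # 2 complete rows remain
```

`dropna()` is the bluntest option; for a larger dataset, imputing `LoanAmount` would preserve more rows.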

Quantifying Bias

How do we quantify bias? There are a number of different methods, but I used a metric known as the “Disparate Impact Ratio.” In plain English, the disparate impact ratio is the rate of positive outcomes (Loan_Status = 1) in the unprivileged group (in our case, female applicants) divided by the rate of positive outcomes in the privileged group (male applicants). A ratio of 1 indicates parity, and a ratio below 0.8 is commonly taken as evidence of disparate impact (the “four-fifths rule”).
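In code, the ratio is just a quotient of two group means over the dataset's Gender and Loan_Status columns (the rows below are made up for illustration):

```python
import pandas as pd

# Illustrative rows only; the real values come from the Kaggle dataset.
df = pd.DataFrame({
    "Gender": ["Female", "Female", "Female", "Male", "Male", "Male"],
    "Loan_Status": [1, 0, 0, 1, 1, 0],
})

# Rate of favorable outcomes (Loan_Status == 1) per group.
unpriv_rate = df.loc[df["Gender"] == "Female", "Loan_Status"].mean()
priv_rate = df.loc[df["Gender"] == "Male", "Loan_Status"].mean()

disparate_impact = unpriv_rate / priv_rate
print(round(disparate_impact, 2))  # (1/3) / (2/3) -> 0.5
```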

Training The Model

I used logistic regression, one of the simpler classification algorithms, as it is efficient in cases where the outcome variable to be predicted is binary. In this dataset, Loan_Status is indeed binary, with a value of 0 indicating a denied loan and a value of 1 indicating an approved loan. I used scikit-learn’s LogisticRegression model.
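A sketch of that training step on synthetic numeric features (the real inputs would be the encoded applicant columns; everything below is made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the encoded applicant features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Binary Loan_Status driven mostly by the first feature.
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
print((y_pred == y_test).mean())  # holdout accuracy
```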

Evaluating Performance

Next, I sought to evaluate the model’s performance. I used scikit-learn’s metrics module to compute classification performance measures. Not bad for such a simple baseline model! Obviously, in the real world, I’d want to try out multiple other techniques, but the focus of this exercise is not on modeling, and I want to keep this article brief.
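The screenshot with those numbers didn't survive extraction, but the evaluation looks like this with scikit-learn's metrics module (the labels below are hypothetical):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical test labels and model predictions.
y_test = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(accuracy_score(y_test, y_pred))   # 6 of 8 correct -> 0.75
print(precision_score(y_test, y_pred))  # 3 of 4 predicted approvals correct
print(recall_score(y_test, y_pred))     # 3 of 4 actual approvals recovered
print(f1_score(y_test, y_pred))
```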

Evaluating Bias in the Predicted Outcomes

To quantify bias, I calculated the disparate impact ratio just as before, except instead of using the actual outcomes in the testing data, I used the predicted outcomes produced by the model I had just trained. I arrived at a disparate impact ratio of .66. This is worse than the ratio of .83 computed from the actual test labels: the model is more biased than the data it was trained on. This is not a surprise, as ML models can easily amplify bias present in their training data.
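The same ratio, computed once on the true test labels and once on the model's predictions. The arrays here are invented to mirror the pattern in the text, where the predictions come out more biased than the labels:

```python
import numpy as np

def disparate_impact(outcomes, gender):
    """Favorable-outcome rate for females over the rate for males."""
    outcomes = np.asarray(outcomes)
    gender = np.asarray(gender)
    return outcomes[gender == "F"].mean() / outcomes[gender == "M"].mean()

gender = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0])   # actual test labels
y_model = np.array([1, 0, 0, 0, 1, 1, 1, 0])  # model predictions

print(disparate_impact(y_true, gender))   # 0.50 / 0.75
print(disparate_impact(y_model, gender))  # 0.25 / 0.75 -- amplified bias
```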

Mitigating Bias with AI Fairness 360

The Toolkit

To mitigate bias, I utilized AIF360, an open-source Python toolkit of fairness metrics and mitigation algorithms introduced by IBM Research in 2018. Bias mitigation algorithms can be generally categorized into three categories: pre-processing (which modifies the data prior to training), in-processing (which modifies the classifier itself), and post-processing (which modifies the prediction labels that are output). I chose to apply a pre-processing algorithm offered by the AIF360 package, DisparateImpactRemover, which edits feature values to improve group fairness while preserving the rank ordering within each group.
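To make the pre-processing idea concrete, here is a plain-numpy sketch of the technique DisparateImpactRemover is based on (Feldman et al.'s 2015 "repair"): each value is moved toward the pooled distribution's value at its within-group quantile, pulling the group distributions together while preserving the ordering inside each group. This is an illustration of the concept, not the AIF360 implementation:

```python
import numpy as np

def repair_feature(x, groups, repair_level=1.0):
    """Rank-preserving repair of one numeric feature.

    repair_level=1.0 is a "full repair": every value is replaced by the
    pooled distribution's value at that value's within-group quantile.
    """
    x = np.asarray(x, dtype=float)
    groups = np.asarray(groups)
    repaired = x.copy()
    for g in np.unique(groups):
        mask = groups == g
        vals = x[mask]
        # Within-group quantile (0..1) of each value.
        ranks = vals.argsort().argsort()
        q = ranks / max(len(vals) - 1, 1)
        # Pooled distribution's value at the same quantile.
        target = np.quantile(x, q)
        repaired[mask] = (1 - repair_level) * vals + repair_level * target
    return repaired

# Two groups with very different incomes; after repair the group
# medians coincide, but each group's internal ordering is unchanged.
income = np.array([1.0, 2.0, 3.0, 4.0, 11.0, 12.0, 13.0, 14.0])
sex = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
fixed = repair_feature(income, sex)
print(np.median(fixed[sex == "F"]), np.median(fixed[sex == "M"]))
```

Setting `repair_level` between 0 and 1 interpolates between the original and fully repaired values, trading fairness against predictive information, which is also how AIF360 parameterizes it.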

Applying the Pre-Processing

This is where it gets tricky. Objectively, the AIF360 docs aren’t the best, but going up the inheritance chain, I saw that AIF360 requires users to convert the Pandas dataframe into a data type called a BinaryLabelDataset before applying the disparate impact removal algorithm. I then created a DisparateImpactRemover object, which is used to run a repairer on the non-protected features of the dataset. After running the repairer, I converted the resulting BinaryLabelDataset back into a Pandas dataframe.

Closing Thoughts

The goal of this exercise was to begin to explore bias: to see how easily bias can get amplified in ML models, and to try out potential approaches to mitigating it. Before training the model, I had already observed bias in the original dataset’s test labels (a disparate impact ratio of .83). When I trained a model and evaluated its predicted outcomes, the bias was amplified (a ratio of .66).
