Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Projects

Generative Transformers for Diverse Text Generation

Ishika Agarwal, Priyanka Kargupta, Bowen Jin, Akul Joshi

Diverse text generation is an important and challenging task. Existing methods mainly adopt a discriminative model, with the underlying assumption that the mapping from input text to output text is one-to-one. However, this does not hold in the real world: given a single input text, there can be multiple ground-truth output candidates. For example, in commonsense generation, given a list of knowledge entities, there is more than one way to combine them into a sentence. This motivates us to capture the underlying distribution of text semantics with generative models (e.g., VAEs and diffusion models). On the other hand, the Transformer architecture has been demonstrated to be effective at capturing text semantics. The problem then becomes how to effectively combine the Transformer architecture with these generative models. Our project aims to combine the best of both worlds by introducing VAEs and diffusion models into Transformers. Specifically, we apply them to two downstream tasks: commonsense generation and question generation. We include results and discuss future work to extend this project.
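To illustrate the one-to-many idea, here is a minimal NumPy sketch of the VAE-style latent step: toy encode/decode functions stand in for the Transformer encoder and decoder (all function names and numbers here are illustrative, not the project's actual model).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Stand-in for a Transformer encoder: maps the input representation
    # to the parameters of a Gaussian posterior over latent semantics.
    mu = 0.5 * x
    logvar = np.full_like(x, -1.0)
    return mu, logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through the sampling step during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Stand-in for a Transformer decoder conditioned on the latent z;
    # different z values yield different (but plausible) outputs.
    return np.tanh(z)

x = np.ones(4)  # one fixed input
mu, logvar = encode(x)
outputs = [decode(reparameterize(mu, logvar, rng)) for _ in range(3)]

# One input, several distinct outputs: sampling the latent captures the
# one-to-many nature of diverse text generation.
distinct = len({tuple(np.round(o, 6)) for o in outputs})
print(distinct)  # 3
```

The key design point is that diversity comes from sampling the latent z, not from randomizing the decoder itself.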

Download here

[Master’s Thesis] Active Graph Anomaly Detection

Ishika Agarwal, Hanghang Tong

Recently, detecting anomalies in attributed networks has gained a lot of attention from research communities due to numerous real-world use cases in the financial, social media, medical, and agricultural domains. This thesis explores node anomaly detection from two angles: soft labeling and multi-armed bandits. In both settings, the environment is constrained to an active learning scenario with no direct access to ground-truth labels, only access to an oracle. The thesis comprises three works: one using soft labeling, another using multi-armed bandits, and a third that combines both. We present experimental results for each work to justify the algorithmic decisions made, and discuss future work that builds on these methods.
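The constrained setting described above can be sketched as a simple query loop: an anomaly scorer ranks nodes, and an oracle is consulted under a fixed label budget (the scoring and oracle functions here are hypothetical toys, not the thesis algorithms).

```python
def anomaly_score(node):
    # Toy stand-in scorer: larger means "more likely anomalous".
    return node % 7

def oracle(node):
    # The oracle answers ground-truth queries, but each query costs budget.
    return node in {3, 10}

def active_detect(nodes, budget):
    labeled = {}
    # Query the highest-scoring unlabeled nodes until the budget runs out.
    for node in sorted(nodes, key=anomaly_score, reverse=True):
        if budget == 0:
            break
        labeled[node] = oracle(node)
        budget -= 1
    return labeled

found = active_detect(range(12), budget=6)
print(sum(found.values()))  # 2 of the six queried nodes are true anomalies
```

The budget constraint is what makes the query-selection strategy (soft labeling, bandits, or a combination) matter: only a few oracle calls are available, so they must go to the most informative nodes.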

Download here

Publications

HiSaRL: A Hierarchical Framework for Safe Reinforcement Learning

Zikang Xiong, Ishika Agarwal, Suresh Jagannathan

Published in SafeAI @ AAAI, 2021

We propose a two-level hierarchical framework for safe reinforcement learning in complex environments. The high-level part is an adaptive planner, which learns and generates safe, efficient paths for tasks with imperfect map information. The low-level part contains a learning-based controller and its corresponding neural Lyapunov function, which characterizes the controller's stability. This learned neural Lyapunov function serves two purposes: first, it forms part of the high-level heuristic for our planning algorithm; second, it acts as part of a runtime shield that guards the safety of the whole system. We use a robot navigation example to demonstrate that our framework operates efficiently and safely in complex environments, even under adversarial attacks.
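The shield idea can be sketched in a few lines: accept the controller's proposed action only if the Lyapunov value decreases along the predicted next state, otherwise fall back to a safe backup action. The quadratic V and one-step dynamics below are toy stand-ins for the learned neural Lyapunov function and the real system, not the paper's implementation.

```python
import numpy as np

def V(state):
    # Toy stand-in for the learned neural Lyapunov function:
    # positive away from the goal, zero at the goal.
    return float(np.dot(state, state))

def step(state, action):
    # Toy dynamics model the shield uses to look one step ahead.
    return state + action

def shield(state, proposed, backup, margin=1e-3):
    # Runtime shield: keep the controller's action only if the
    # Lyapunov value strictly decreases; otherwise use the backup.
    if V(step(state, proposed)) <= V(state) - margin:
        return proposed
    return backup

s = np.array([1.0, -2.0])
unsafe = np.array([5.0, 5.0])  # pushes the state away from the goal
safe = -0.1 * s                # moves the state toward the goal

a = shield(s, unsafe, safe)
print(np.allclose(a, safe))  # True: the unsafe proposal was rejected
```

The same decrease condition doubles as a planning heuristic: states from which V can be driven down are the ones the high-level planner should prefer.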

Download here

QuickAns: A Virtual TA

Ishika Agarwal, Shradha Sehgal, Varun Goyal, Prathamesh Sonawane

Published in AIML Systems, 2023

QuickAns is a virtual teaching assistant designed to help course staff who use Campuswire as their Q&A platform. It reads Campuswire posts from digest emails and sends a potential answer to the course staff, who can then review the answer for any logistical issues and respond to a student's question in a matter of minutes.

Download here

Neural Active Learning Beyond Bandits

Yikun Ban, Ishika Agarwal, Ziwei Wu, Yada Zhu, Kommy Weldemariam, Hanghang Tong, Jingrui He

Published in ICLR, 2024

We study both stream-based and pool-based active learning with neural network approximations. A recent line of work proposed bandit-based approaches that transform active learning into a bandit problem, achieving both theoretical and empirical success. However, the performance and computational costs of these methods may be susceptible to the number of classes, denoted K, because of this transformation. This paper therefore seeks to answer the question: "How can we mitigate the adverse impacts of K while retaining the advantages of principled exploration and provable performance guarantees in active learning?" To tackle this challenge, we propose two algorithms based on newly designed exploitation and exploration neural networks for stream-based and pool-based active learning. We then provide theoretical performance guarantees for both algorithms in a non-parametric setting, demonstrating a slower error-growth rate with respect to K. Extensive experiments show that the proposed algorithms consistently outperform state-of-the-art baselines.
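A pool-based round with paired exploitation and exploration scorers can be sketched as below; the two scoring functions are simple hypothetical stand-ins for the paper's exploitation and exploration networks, chosen only to show the selection loop.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pool of unlabeled points (features only).
pool = rng.standard_normal((8, 3))

def exploitation_score(x):
    # Stand-in for the exploitation network f1: predicted class score.
    return float(np.tanh(x.sum()))

def exploration_score(x, f1_out):
    # Stand-in for the exploration network f2: estimates how far off
    # the exploitation prediction might still be for this point.
    return abs(0.5 - abs(f1_out)) + 0.01 * float(np.linalg.norm(x))

def select_query(pool):
    # Pool-based round: score every candidate with f1, then query the
    # point whose combined exploitation + exploration score is largest.
    scores = []
    for x in pool:
        f1 = exploitation_score(x)
        scores.append(exploration_score(x, f1))
    return int(np.argmax(scores))

idx = select_query(pool)
print(0 <= idx < len(pool))  # True: exactly one index is queried per round
```

The point of pairing the two networks is that the exploration scorer learns from the exploitation network's residuals directly, rather than going through a per-class bandit reduction whose cost grows with K.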

Download here

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use Markdown as in any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use Markdown as in any other post.