From Local Markets to Global Business
My motivation for AI security began with curiosity about the theoretical foundations of AI and a desire to do hard, exciting work. In 2011, my mother lost nearly $200,000 to online fraud and did not even realize it until I discovered it. That experience made me think seriously about the need for security. While most stakeholders and movers focus on the benefits of new technology, a few aspire to solve the real-world security problems it brings, paying special attention to people who are unfamiliar with new technologies, like my mother, because this population is the most susceptible to cyber-crime. I aspire to be one of them: for example, by making AI services trustworthy and preventing AI-enabled fraud such as deepfakes.
I am working on an AI safety research project: making a neural network forget users' data. My goal is to efficiently revoke someone's information from a standard trained network without losing model performance and without retraining from scratch. At first, I tried to solve the problem directly, but that approach did not work well for research; I tried many methods and made little progress toward my final goal. So I defined two types of deletion: deletion of an entire class, which is more achievable but less specific, and deletion of a particular sample, which is harder but more precise. I achieved class deletion by removing from the weights the gradient contributions of that class's samples. I also learned that breaking a problem into small, manageable sub-objectives makes the overarching goal easier to reach.
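To make the class-deletion idea concrete, here is a minimal sketch on a toy linear classifier, not the project's actual method or code: during training we log the weight updates attributable to each class's samples, and "deleting" a class means subtracting its accumulated contribution back out of the weights. All names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_per = 3, 3, 50

# Toy data: one Gaussian blob per class, centred on a scaled one-hot vector.
means = 2.0 * np.eye(n_classes)
X = np.vstack([rng.normal(means[c], 0.3, size=(n_per, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

W = np.zeros((dim, n_classes))
# Bookkeeping: accumulated weight updates attributable to each class's samples.
per_class_update = np.zeros((n_classes, dim, n_classes))
lr = 0.5

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def accuracy(W, X, y):
    return float((np.argmax(X @ W, axis=1) == y).mean())

for _ in range(200):
    p = softmax(X @ W)
    p[np.arange(len(y)), y] -= 1.0                       # dLoss/dlogits for cross-entropy
    for c in range(n_classes):
        mask = y == c
        update = -lr * (X[mask].T @ p[mask]) / len(y)    # this class's share of the step
        W += update
        per_class_update[c] += update                    # log it for later deletion

acc_before = accuracy(W, X, y)

# "Delete" class 0 by removing its accumulated contribution to the weights.
W_unlearned = W - per_class_update[0]
kept = y != 0
acc_kept = accuracy(W_unlearned, X[kept], y[kept])
acc_deleted = accuracy(W_unlearned, X[~kept], y[~kept])
print(acc_before, acc_kept, acc_deleted)
```

On this toy problem the unlearned model keeps high accuracy on the remaining classes while accuracy on the deleted class collapses toward chance, which is the behaviour class deletion aims for.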
“You are doing a Ph.D. research project,” Dr. Zhang in my lab told me.
Deletion from a neural network is not as easy as deleting data from a dataset. First, a neural network stores only weights learned from the training data, so we must revoke the effect of the data on the weights, not the data itself. Second, we need ways to locate that effect inside the network, which is challenging because neural networks are not yet fully interpretable. I believe I am working on a real-world problem, one that will help protect people's freedom and data privacy against the misuse of AI.
I have faced several deadlines and tasks since last October. I took over a project, an AI risk-assessment platform, because my colleague had trouble turning algorithms into practical solutions. I implemented APIs for four adversarial-attack algorithms and made those APIs work with all of our models. The work was challenging because I was unfamiliar with the algorithms and it had to be done before November. I first standardized the data structures of the algorithms and made them support each model, from the easiest to the hardest. I then spent most of my time separating the training part of each algorithm into one shared module so they could interact with the same model, because every algorithm originally used different pre-trained models, environments, and intermediate parameters during training. From this I learned how to handle tasks beyond my current ability: clarify manageable objectives for a hard problem, know what I am capable of, and avoid getting lost in details I do not know but that are not crucial.
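The standardization idea can be sketched as a shared attack interface that any model can be evaluated against. This is a hypothetical illustration, not the platform's actual API: the class names, the toy linear model, and the FGSM example are all assumptions standing in for the four real algorithms.

```python
import numpy as np
from abc import ABC, abstractmethod

class Attack(ABC):
    """Shared interface every wrapped attack algorithm must implement."""
    @abstractmethod
    def perturb(self, model, x, y):
        """Return an adversarial version of batch x with true labels y."""

class LinearModel:
    """Stand-in for 'any model': exposes logits and input-space gradients."""
    def __init__(self, W):
        self.W = W
    def logits(self, x):
        return x @ self.W
    def loss_input_grad(self, x, y):
        # Cross-entropy gradient with respect to the input, for this linear model.
        z = self.logits(x)
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0
        return p @ self.W.T

class FGSM(Attack):
    """Fast Gradient Sign Method, wrapped behind the shared interface."""
    def __init__(self, eps=1.5):
        self.eps = eps
    def perturb(self, model, x, y):
        return x + self.eps * np.sign(model.loss_input_grad(x, y))

def evaluate(model, attack, x, y):
    """Run any attack against any model through the uniform API."""
    x_adv = attack.perturb(model, x, y)
    clean = float((model.logits(x).argmax(1) == y).mean())
    adv = float((model.logits(x_adv).argmax(1) == y).mean())
    return clean, adv

# Usage: two Gaussian blobs, a "pre-trained" identity-weight model, one attack.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0 * np.eye(2)[c], 0.3, size=(50, 2)) for c in range(2)])
y = np.repeat(np.arange(2), 50)
model = LinearModel(np.eye(2))
clean_acc, adv_acc = evaluate(model, FGSM(eps=1.5), X, y)
print(clean_acc, adv_acc)
```

Because every attack exposes only `perturb`, the platform's evaluation loop never needs to know about an algorithm's pre-trained models or internal training parameters, which was the point of separating them into one module.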
My current plan is to finish my paper by April. I will apply my deletion method to a face dataset, since that setting is directly relevant under the EU's GDPR, and then discuss the results, why sample-level deletion did not work, and possible future work. I believe the key to further deletion is identifying the common and distinctive features of the deleted samples inside the network, which is a question of interpretability. I hope to explore this work further at Georgia Tech and would like to find an industry internship during my studies. My goal is to take a significant project from zero, continuing to accumulate project experience and deepen my understanding.
My long-term career objective is to become a security researcher and develop general, economical solutions that secure systems for the general public against cyber-crime through detection, prevention, and defense. I also hope to consider factors beyond technology, including how humans behave and how policy works, with an emphasis on AI systems as AI services, and possible AI-enabled crimes, emerge over the coming years. Few companies, especially small companies and startups, can balance enabling their business with securing it and their users' data, because they cannot afford secure solutions. I have seen many enterprises in medicine, education, and delivery use insecure AI services, resulting in financial loss or even endangered personal safety.
The first step toward my goal is to master cutting-edge security technologies and the theoretical foundations of AI and systems. The second is to apply that expertise to real-world problems, from AI services to broader systems. The final step is to develop general, secure processes, technologies, or system architectures. I chose the Georgia Tech MSCS because its academic and industrial resources closely match this goal, and I hope to pursue it there.
First, I am interested in both the technological and policy aspects of solutions for business services. The Georgia Tech CS curriculum provides grounding and capstone courses that combine security, policy, and economic perspectives. Second, I will be well advised and fully engaged in my goal here, as professors in security and the IISP cover a broad range of areas, including AI for security, interpreting AI, and human-centered AI. I aspire to work on making AI interpretable and trustworthy enough for widespread use.
The Georgia Tech MSCS allows me to customize my degree to complete my first step. I will build a solid foundation through core systems courses. Advanced, practical courses such as Information Security Strategies and Policies will give me insight into the limitations of AI systems and how to deploy them, as well as into where cutting-edge security theory and industry are headed. With that background, I also aspire to work on a supervised project and join a lab in my first year. The next step is to become a security research engineer in an industry research lab, such as the [University Name]-IBM Watson AI Lab or Google Brain, carrying projects from exploratory thinking to the development of practical solutions.