cv
Basics
Name | Farid Karimli |
Label | ML Researcher |
Email | faridkar@bu.edu |
Phone | 857-234-7396 |
Url | https://github.com/Farid-Karimli |
Work
- 2024.05 - now
ML Researcher
Gardos Lab @ BU CDS
Working on a multi-modal foundation model for analyzing millions of herbaria samples, as well as on educational AI tutors. Advised by Dr. Thomas Gardos.
- Language Models
- Computer Vision
- AI Agents
- 2023.06 - now
MS Researcher
Image and Video Computing Group @ BU CS
I work on video-based human-computer interfaces and assistive technology. I lead the redesign of CameraMouse, a desktop application that enables people with severe motor impairments to control a computer with their head movements. I developed advanced face-tracking and gesture-recognition capabilities to improve user interaction, built a testing GUI for analyzing mouse-movement efficiency, and created a testing routine that includes both target selection and typical daily tasks. Advised by Dr. Margrit Betke.
- Human-computer interaction
- Facial recognition
- Assistive technology
- 2023.05 - 2023.08
Engineering Intern
Software Application and Innovation Lab
Built a network-graph visualization of the ASL lexicon; developed a data pipeline and dashboard for advertising, marketing, and internal analytics; debugged Android API and OS issues.
- Linguistics research
- Data Visualization
- Android and desktop applications
- 2021.06 - 2021.08
Data Engineer
Smart Solutions Group
Automated scraping of e-tender activity in the banking industry; collected and analyzed social media posts; used SQL to fetch data for trend analysis and detection of fraudulent marketplace activity.
- Social Media analytics
- Web Scraping
- Data Visualization
Education
Publications
- (preprint) A Head-Based Mouse-Control System for People with Severe Disabilities
Farid Karimli, Hao Yu, Manny Akosah, Wenxin Feng, and Margrit Betke
We propose a complete AI-based redevelopment of the CameraMouse interface that includes automated facial-feature detection and new ways to map facial-feature positions to mouse coordinates and facial-feature movements to mouse clicks. In addition to the traditional dwell-time mechanism, users can select items on the screen by opening their mouth or raising their eyebrows.
Skills
Python | Data Frameworks, PyTorch, GUIs, Django |
Javascript/Typescript | React, NodeJS, Express |
Java | Spring Boot |
SQL | MySQL, Cloud SQL |
Languages
Azerbaijani | Native speaker |
Russian | Fluent |
English | Fluent |
Spanish | Intermediate |
Interests
Deep Learning | Computer Vision, Language Models, AI Agents |
Data Science | Data Visualization, Quantitative Analysis |
Software Development | Full Stack Development, Web Applications, Databases |
Projects
- 2024.05 - now
AirMouse
Control your mouse with finger pointing and hand gestures.
- Computer Vision
- Desktop Software
- Open Source
- 2024.02 - now
VideoASL
Using masked autoencoders and space-time attention to classify extended American Sign Language sequences in video.
- Video classification
- Space-time Attention
- 2024.02 - now
FGK.ai
Attempting to build a chatbot that mimics my style of texting and writing. Built with Chainlit, LangChain, and OpenAI.
- AI Agent
- AI Assistant