Ebenezer Tarubinga

Hello! I am an ML/CV Engineer. I completed my M.Sc. in AI at Korea University (2023-25), advised by Dr. Seong-Whan Lee.

I have 4+ years of experience in computer vision, deep learning, robotics, and software engineering. My most recent work has focused on deploying image & video segmentation models.
Aside from semantic segmentation, I also work on depth estimation, instance retrieval, dense matching, and sparse matching. I also have research experience in big data and reinforcement learning.

If you want to discuss anything, please feel free to reach out :)

Email  /  CV  /  Google Scholar  /  LinkedIn  /  GitHub

Publications/Projects
CW-BASS: Confidence-Weighted Boundary Aware Learning for Semi-Supervised Semantic Segmentation
Ebenezer Tarubinga, Jenifer Kalafatovich, Seong-Whan Lee
IJCNN 2025
Project Page / Code / Paper

Tackled boundary blur and confirmation bias using confidence-weighted and boundary-focused techniques to improve segmentation performance.

FARCLUSS: Fuzzy Adaptive Rebalancing and Contrastive Uncertainty Learning for Semi-Supervised Semantic Segmentation
Ebenezer Tarubinga, Jenifer Kalafatovich, Seong-Whan Lee
Neural Networks (Elsevier) – Under Review
Code / Paper

Introduced fuzzy labels and lightweight contrastive learning to improve generalization in semi-supervised semantic segmentation.

Computer Vision & Adversarial ML
Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning.

Project on designing stealthy backdoor triggers in CLIP-style models by aligning visual and textual embeddings (Liang et al., CVPR 2024).

Semantic-Aware Multi-Label Adversarial Attacks.

Project on crafting targeted perturbations for multi-label classifiers that respect semantic label dependencies (Mahmood et al., CVPR 2024).

Self-Training for Semi-Supervised Semantic Segmentation.

Exploration of ST++ techniques—strong data augmentations and selective re-training—to boost segmentation with limited labels (Yang et al., CVPR 2022).

Scalable Urban Dynamic Scenes.

Advanced scene representation using multi-branch hash tables and NeRF-style encoding for large-scale dynamic urban reconstructions (Turki et al., CVPR 2023).

Speech Emotion Recognition.

Implementation of a Bi-GRU with self-attention for classifying emotions from audio and text (Wu et al., ICASSP 2023).

Autoregressive text-to-image generation.

Parti (Google Research, TMLR 2022).

Controllable text-to-image generation.

ControlGAN (B. Li et al., NeurIPS 2019).

Diffusion based text-to-image generation.

Imagen (Google Research, NeurIPS 2022).

Reinforcement Learning
Aligning Segment Anything Model to Open Context via Reinforcement Learning.

Study on automatically prompting SAM with an RL agent for diverse segmentation tasks (Huang et al., CVPR 2024).

Tool-Augmented Reward Modeling.

Integrating external APIs into reward models for more accurate, transparent decision processes (Li et al., ICLR 2024).

Medical & Healthcare Imaging
ASD Classification with Multi-Site fMRI Data.

Development of a second-order functional connectivity embedding with domain adaptation to improve autism detection across diverse sites (Kunda et al., IEEE TMI 2022).

Mutual Correction Framework for Semi-Supervised Medical Image Segmentation.

Reviewed and improved the Mutual Correction Framework (MCF), which refines medical image segmentation masks via mutual correction to mitigate confirmation bias (Wang et al., CVPR 2023).

Large-Scale Language Models
Why Does the Effective Context Length of LLMs Fall Short?

Investigated the gap between the declared training context length and the empirical "effective" context length, analyzing positional-encoding biases and position-shifting methods (An et al., ICLR 2025).

Work Experience

Machine Learning Research Engineer
Pattern Recognition & Machine Learning Lab | Seoul, Korea
Aug 2023 – July 2025

  • Designed and trained novel semantic segmentation models (CW-BASS, FARCLUSS), achieving performance gains of up to 25% over baselines.
  • Contributed to object detection & tracking and video classification pipelines using PyTorch, targeting autonomous driving and video understanding.

Software & AI Engineer Lead & Project Manager
GliTech (Global Leaders in African Tech) | Bulawayo, Zimbabwe
Jan 2019 - Jan 2021

  • Led the software team, successfully completing 10+ full-cycle projects on time and on budget, boosting client satisfaction.
  • Set up Innovation Hub clubs across various schools, promoting tech innovation and teamwork among over 250 students, increasing STEM field engagement by 35%.
  • Implemented community tech solutions via stakeholder engagement, securing 10+ partnerships with schools and boosting technology adoption by 40%.
Education

Korea University
Master of Science, Artificial Intelligence
Sep 2023 - Aug 2025

Core Skills
  • Vision Tasks: Scene classification, object detection/tracking, semantic segmentation, depth estimation
  • Frameworks & Tools: PyTorch, TensorFlow, OpenCV, CUDA, ONNX, Docker, Blender, Unity
  • Languages: Python, C++, C#, Bash, Java
  • Others: MLflow, Agile development, CI/CD, NVIDIA Jetson, academic writing, stakeholder engagement
Certificates
  • IBM Applied AI Professional Certificate - IBM
  • Modern Robotics Specialization - Northwestern University
  • Foundations of Project Management - Google
  • Semantic Segmentation with Amazon SageMaker - Amazon
  • AWS S3 Basics - Amazon Web Services
  • Machine Learning Pipelines with Azure ML Studio - Microsoft
  • Neuroscience - Emory University
  • Game Development using Scratch - MIT

Template based on Jon Barron's website.