Ebenezer Tarubinga

Hello! I am an ML/CV engineer. I received my M.Sc. in AI from Korea University (2023–25), advised by Dr. Seong-Whan Lee.

I have 4+ years of experience in computer vision, deep learning, robotics, and software engineering. My most recent work has focused on deploying image and video segmentation models.
Aside from semantic segmentation, I also work on depth estimation, instance retrieval, and dense/sparse matching, and I have research experience in big data and reinforcement learning.

If you would like to discuss anything, please feel free to reach out :)

Email  /  CV  /  Google Scholar  /  LinkedIn  /  GitHub

Publications/Projects
CW-BASS: Confidence-Weighted Boundary Aware Learning for Semi-Supervised Semantic Segmentation
Ebenezer Tarubinga, Jenifer Kalafatovich, Seong-Whan Lee,
IJCNN 2025
Project Page / Code / Paper

Tackled boundary blur and confirmation bias using confidence-weighted and boundary-focused techniques to improve segmentation performance.
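To illustrate the confidence-weighting idea, here is a minimal NumPy sketch of weighting a pseudo-label loss by teacher confidence. This is my own toy formulation, not the CW-BASS implementation; the function name, array shapes, and weighting scheme are assumptions for illustration only.

```python
import numpy as np

def confidence_weighted_ce(student_probs, teacher_probs, eps=1e-8):
    """Toy confidence-weighted pseudo-label loss (illustrative only).

    Per-pixel cross-entropy against the teacher's hard pseudo-labels,
    down-weighted by the teacher's confidence (its max softmax score),
    so uncertain pixels contribute less to training.

    student_probs, teacher_probs: arrays of shape (H, W, C) holding
    per-pixel class probabilities.
    """
    pseudo = teacher_probs.argmax(axis=-1)   # hard pseudo-labels, shape (H, W)
    conf = teacher_probs.max(axis=-1)        # teacher confidence in [0, 1]
    h, w, c = student_probs.shape
    # Probability the student assigns to each pixel's pseudo-label.
    picked = student_probs.reshape(-1, c)[np.arange(h * w), pseudo.ravel()]
    picked = picked.reshape(h, w)
    return float((conf * -np.log(picked + eps)).mean())
```

In this sketch, a pixel where the teacher is unsure (low max probability) is simply down-weighted; the actual paper additionally incorporates boundary-aware terms not shown here.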

FARCLUSS: Fuzzy Adaptive Rebalancing and Contrastive Uncertainty Learning for Semi-Supervised Semantic Segmentation
Ebenezer Tarubinga, Jenifer Kalafatovich, Seong-Whan Lee,
Neural Networks (Elsevier) – Under Review
Code / Paper

Introduced fuzzy labels and lightweight contrastive learning to improve generalization in semi-supervised semantic segmentation.
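A minimal NumPy sketch of the soft ("fuzzy") pseudo-label idea: instead of collapsing the teacher's output to a one-hot label, train the student against a sharpened soft distribution. This is an illustrative toy, not the FARCLUSS method; the function names and the temperature-sharpening step are assumptions.

```python
import numpy as np

def sharpen(probs, temperature=0.5):
    """Sharpen a probability distribution; lower temperature pushes it
    closer to one-hot while keeping it a valid soft target."""
    p = probs ** (1.0 / temperature)
    return p / p.sum(axis=-1, keepdims=True)

def soft_pseudo_label_ce(student_probs, teacher_probs,
                         temperature=0.5, eps=1e-8):
    """Toy cross-entropy against sharpened soft teacher targets,
    retaining the teacher's uncertainty instead of a hard argmax."""
    target = sharpen(teacher_probs, temperature)
    return float(-(target * np.log(student_probs + eps)).sum(axis=-1).mean())
```

Keeping the target soft lets ambiguous pixels express partial class membership, which is the intuition behind fuzzy labels; the paper's rebalancing and contrastive components are not reproduced here.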

Computer Vision & Adversarial ML
Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning.

Project on designing stealthy backdoor triggers in CLIP-style models by aligning visual and textual embeddings (Liang et al., CVPR 2024).

Semantic-Aware Multi-Label Adversarial Attacks.

Project on crafting targeted perturbations for multi-label classifiers that respect semantic label dependencies (Mahmood et al., CVPR 2024).

Self-Training for Semi-Supervised Semantic Segmentation.

Exploration of ST++ techniques—strong data augmentations and selective re-training—to boost segmentation with limited labels (Yang et al., CVPR 2022).

Scalable Urban Dynamic Scenes.

Advanced scene representation using multi-branch hash tables and NeRF-style encoding for large-scale dynamic urban reconstructions (Turki et al., CVPR 2023).

Speech Emotion Recognition.

Implementation of a Bi-GRU with self-attention framework for classifying emotions from audio/text (Wu et al., ICASSP 2023).

Autoregressive text-to-image generation.

Parti (Google Research, TMLR 2022).

Controllable text-to-image generation.

ControlGAN (B. Li et al., NeurIPS 2019).

Diffusion based text-to-image generation.

Imagen (Google Research, NeurIPS 2022).

Reinforcement Learning
Aligning Segment Anything Model to Open Context via Reinforcement Learning.

Study on automatically prompting SAM with an RL agent for diverse segmentation tasks (Huang et al., CVPR 2024).

Tool-Augmented Reward Modeling.

Integrating external APIs into reward models for more accurate, transparent decision processes (Li et al., ICLR 2024).

Medical & Healthcare Imaging
ASD Classification with Multi-Site fMRI Data.

Development of a second-order functional connectivity embedding plus domain-adaptation to improve autism detection across diverse sites (Kunda et al., IEEE TMI 2022).

Mutual Correction Framework for Semi-Supervised Medical Image Segmentation.

Reviewed and improved the Mutual Correction Framework (MCF), which refines medical image segmentation masks through mutual correction between two networks to mitigate confirmation bias (Wang et al., CVPR 2023).

Large-Scale Language Models
Why Does the Effective Context Length of LLMs Fall Short?

Investigated the gap between an LLM's declared training context length and its empirical "effective" context length, analyzing positional-encoding biases and shifting-based remedies (An et al., ICLR 2025).

Work Experience

Machine Learning Research Engineer
Pattern Recognition & Machine Learning Lab | Seoul, Korea
Aug 2023 – Aug 2025

  • Built and deployed novel segmentation models, outperforming baselines by up to 25% mIoU.
  • Collaborated with external industry partners on R&D initiatives, publishing papers and filing a patent.
  • Developed object detection and tracking pipelines for autonomous driving applications.
  • Led research in depth estimation, instance retrieval, and dense/sparse matching.

Machine Learning Engineer (Freelance)
Upwork | Remote
Aug 2022 – Aug 2023

  • Designed and implemented end-to-end ML pipelines for various classification tasks.
  • Created open-source computer vision projects and contributed to the community.
  • Improved performance through hyperparameter tuning and experimental analysis.
  • Integrated ML models into production systems in collaboration with cross-functional teams.

CTO, Software & AI Engineer
GliT | Hybrid
Aug 2022 – Aug 2023

  • Led tech strategy and completed 10+ full-cycle projects successfully.
  • Built and deployed AI-powered applications using deep learning frameworks.
  • Launched Innovation Hub clubs in schools, engaging 250+ students and raising STEM participation by 35%.
  • Secured 10+ school partnerships, increasing tech adoption by 40% through community-focused solutions.
Education

Korea University
Master of Science, Artificial Intelligence
2023 - 2025

Core Skills
  • Vision Tasks: Scene classification, object detection/tracking, semantic segmentation, depth estimation
  • Frameworks & Tools: PyTorch, TensorFlow, OpenCV, CUDA, ONNX, Docker, Blender, Unity
  • Languages: Python, C++, C#, Bash, Java
  • Others: MLflow, Agile development, CI/CD, NVIDIA Jetson, academic writing, stakeholder communication
Certificates
IBM Applied AI Professional Certificate - IBM
Modern Robotics Specialization - Northwestern University
Foundations of Project Management - Google
Semantic Segmentation with Amazon Sagemaker - Amazon
AWS S3 Basics - Amazon Web Services
Machine Learning Pipelines with Azure ML Studio - Microsoft
Neuroscience - Emory University
Game Development using Scratch - MIT

Template based on Jon Barron's website.