recruitment-HSAR(R&D)

2025-06-26

Careers

Join Us in Shaping the Future of Human Behavior AI with VLMs!

Position: Vision-Language Model (VLM) Researcher – Human Behavior & Security AI

At the forefront of real-world AI innovation, we develop cutting-edge action recognition technology for security cameras, already in use by industry leaders such as TOKYU, SECOM, NTT, NIKON, CANON Marketing Japan, Hankyu Hanshin Real Estate, and many other companies. Now we are evolving. As we move into the next generation of Vision-Language Models (VLMs), we are searching for ambitious researchers eager to work with massive video datasets and to build intelligent systems that understand human behavior in context.

What Makes This Role Exciting?

・Hands-on Vision-Language Model Development with real-world, large-scale video data

・Multi-GPU Distributed Training at scale

・Collaboration with a diverse global team of AI engineers and researchers

・Deep engagement with human behavior recognition, action detection, and AI ethics

What You’ll Need to Succeed

Must-Haves

・Master’s degree or higher in a field such as ML, NLP, computer vision, VLMs, or data science

・3+ years of programming experience (school/research projects count!)

・Proficient in Python and familiar with deep learning frameworks (PyTorch preferred)

・Research-oriented mindset and love for solving tough problems

・Curiosity about human behavior and intelligent systems

・English reading & writing skills

Nice-to-Haves

・Experience fine-tuning large language or vision-language models (e.g., LoRA-based fine-tuning, CLIP)

・English speaking fluency

・Japanese language ability

If you are interested, please contact us at the address below.

Mail: hr@asilla.jp

