A stealth, engineering-driven startup with significant funding is looking for a Senior Web Crawling Engineer to design and own the core crawling system that powers their next-generation search platform.
The team builds deeply technical systems in Rust; strong experience in C++ or another low-level language, paired with an interest in working in Rust, is also welcome.
What You’ll Work On
- Architect and implement large-scale web crawling infrastructure
- Manage crawling pipelines: scheduling, graph traversal, deduplication, and robustness (see the frontier sketch after this list)
- Integrate crawled data with downstream indexing and storage systems
- Contribute to high-performance Rust systems used across the product
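To give a flavor of the pipeline work above, here is a minimal sketch in Rust (standard library only) of a breadth-first crawl frontier with URL deduplication. The `fetch_links` helper, the seed URL, and the page budget are illustrative assumptions, not this team's actual design; a real crawler would perform HTTP fetching and link extraction at that step.

```rust
use std::collections::{HashSet, VecDeque};

/// Hypothetical fetch-and-extract step; a real crawler would issue an
/// HTTP request here and parse outgoing links from the response body.
fn fetch_links(url: &str) -> Vec<String> {
    println!("fetching {url}");
    Vec::new() // placeholder: no network I/O in this sketch
}

/// Breadth-first crawl frontier: a queue for traversal order and a
/// seen-set for URL deduplication, bounded by a page budget.
fn crawl(seeds: &[&str], budget: usize) {
    let mut frontier: VecDeque<String> = seeds.iter().map(|s| s.to_string()).collect();
    let mut seen: HashSet<String> = frontier.iter().cloned().collect();
    let mut fetched = 0;

    while let Some(url) = frontier.pop_front() {
        if fetched >= budget {
            break;
        }
        fetched += 1;
        for link in fetch_links(&url) {
            // Only enqueue URLs that have not been scheduled before.
            if seen.insert(link.clone()) {
                frontier.push_back(link);
            }
        }
    }
}

fn main() {
    crawl(&["https://example.com/"], 100);
}
```

The queue fixes traversal order while the seen-set guarantees each URL is scheduled at most once; at the scale this role targets, the same split typically maps onto a distributed queue and a persistent dedup store.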
Requirements
Must-have
- Experience building large-scale web crawlers (millions of URLs or comparable scale)
- Strong background in systems programming (Rust, C++, or similar)
- Ability to design, build, and maintain complex distributed systems
- Solid understanding of HTTP, robots.txt, sitemaps, rate limiting, and crawling best practices (see the rate-limiting sketch below)
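For context on the rate-limiting point above, the sketch below shows one simple way to enforce a per-host politeness delay using only the Rust standard library. The `RateLimiter` type, the 500 ms interval, and the host names are hypothetical; a production crawler would also honor robots.txt rules and any Crawl-delay directives.

```rust
use std::collections::HashMap;
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Per-host politeness delay: track the last request time for each host
/// and sleep until the minimum interval has elapsed before fetching again.
struct RateLimiter {
    min_interval: Duration,
    last_hit: HashMap<String, Instant>,
}

impl RateLimiter {
    fn new(min_interval: Duration) -> Self {
        Self { min_interval, last_hit: HashMap::new() }
    }

    /// Block until `host` may be contacted again, then record the hit.
    fn wait_for(&mut self, host: &str) {
        if let Some(last) = self.last_hit.get(host) {
            let elapsed = last.elapsed();
            if elapsed < self.min_interval {
                sleep(self.min_interval - elapsed);
            }
        }
        self.last_hit.insert(host.to_string(), Instant::now());
    }
}

fn main() {
    let mut limiter = RateLimiter::new(Duration::from_millis(500));
    for host in ["example.com", "example.com", "example.org"] {
        limiter.wait_for(host);
        println!("would fetch from {host} now");
    }
}
```

Keying the delay on the host rather than the full URL is what keeps a crawler polite to individual servers even while it fetches many pages from each.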
Nice-to-have
- Production Rust experience or a strong desire to work in Rust
- Familiarity with search, indexing, or storage systems
- Experience with low-latency or high-throughput distributed systems
- Startup-oriented, hands-on engineering mindset
About the Company
- Stealth startup with 8-figure funding
- Building a new search engine layer for AI and autonomous agents
- Small, highly technical engineering team
- Fully remote, with preferred working-hours overlap in European, Eastern, or Pacific time zones
- Flexible arrangements (contract or employer-of-record/EOR available)
How to Apply
Send your resume and a brief summary of previous crawling or systems work to webcrawling@rustjobs.dev