We had an opportunity to help out a group of engineering students working on a senior project. Their goal was to do a deep dive into mountain bike kinematics and learn about proper suspension setup. They gave us a call to ask how we went about designing our data acquisition system. After about 15 questions on electrical engineering, we recommended they not go down the path we did to design an acquisition system. Frankly, it's a lot of grungy work that takes a long time to get through. I had no doubt they could push through and build a system capable of collecting data, but by the time they did, their semester would be over and they would have gotten no riding time in. We had a system lying around, so we donated it to a good cause. Honestly, we're hoping they discover something cool and give us some ideas on how to make our product better.
What stood out to us was how methodical they were in their real-world experiment. They had a local trail they rode often, so they decided to compare three suspension setups on the same bike, with the same rider. The setups were:
- Factory suspension recommendations
- Bike setup based on rider preferences
- Bike setup after using Motion Instruments data analysis
With the factory settings, their lap times were consistently in the 1:40 to 1:50 range. After setting the bike up by rider feel, they were able to drop 10 seconds from those times. Then they tuned the bike using data, focusing specifically on dynamic sag and compression/rebound balance. They claimed the setup they ended up with was something they would never have considered without the data convincing them to go there. The lap times dropped to the 1:20 range, consistently and with little variance over 10 runs.
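A claim like "consistent with little variance over 10 runs" can be put in numbers by looking at the mean and standard deviation of the lap times. Here's a minimal sketch of that comparison using hypothetical lap times for illustration (these are not the team's actual numbers, just values in the ranges described):

```python
import statistics

def parse_lap(t: str) -> int:
    """Convert an "M:SS" lap time string to total seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

# Hypothetical lap times for illustration only -- not the team's data.
factory    = [parse_lap(t) for t in ["1:42", "1:48", "1:45", "1:50", "1:41"]]
data_tuned = [parse_lap(t) for t in ["1:21", "1:19", "1:20", "1:22", "1:20"]]

for label, laps in [("factory", factory), ("data-tuned", data_tuned)]:
    mean = statistics.mean(laps)
    stdev = statistics.stdev(laps)
    print(f"{label}: mean {mean:.1f}s, stdev {stdev:.1f}s")
```

A lower standard deviation on the data-tuned setup is what "little variance" means here: the rider isn't just faster on his best lap, he's faster repeatably.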
We pushed back a little and asked, "Hey, is there a chance the rider just got better through repetition, and not the suspension tune?" Their answer was that their test rider had ridden this trail for years and already knew it well. With a proper setup, he could just ride it faster.
Makes sense. Now the cool thing for us in this project is that after we shipped the system, all we heard was crickets. They didn't ask us how to interpret the data or ask for tuning suggestions. They just consulted the data and correlated it with rider perception. At the end of the day, we are convinced the way to a great setup is trail tuning sessions plus data. You need both. Using rider feel along with the quantitative feedback data provides, you will arrive at the best setup for you.
Here's a link to their senior project and a video with their professor who advised them.