Introduction to Pose Estimation

What is Pose Estimation?

Pose estimation is the process of determining a robot's position and orientation (heading) on the field. In FRC, pose estimation is essential for accurate autonomous navigation, precise scoring, and field awareness. A pose consists of two components: position (X and Y coordinates) and orientation (heading angle).

Without pose estimation, robots operate blindly: they don't know where they are on the field, making autonomous routines unreliable and precise movements impossible. With accurate pose estimation, robots can navigate to specific locations, follow paths accurately, and make intelligent decisions based on their field position.

Learn more: WPILib: Odometry and Pose Estimation

What is a Pose?

A pose represents a robot's complete spatial state on the field. It consists of:

Position (X, Y): The robot's location on the field, typically measured in meters from a reference point (the field origin). X runs along the length of the field and Y along its width.

Orientation (Heading/Angle): The direction the robot is facing, typically measured in degrees or radians. In FRC, heading is usually measured from 0° (pointing in the positive X direction) with positive angles representing counterclockwise rotation.

WPILib represents poses using the Pose2d class, which combines a Translation2d (position) and a Rotation2d (orientation). This provides a complete description of the robot's state in 2D space.

Basic Pose Representation

package frc.robot.examples;

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.geometry.Translation2d;

public class PoseExample {
    public static void main(String[] args) {
        Pose2d robotPose = new Pose2d(
            new Translation2d(2.0, 3.0),  // X = 2.0m, Y = 3.0m
            Rotation2d.fromDegrees(45.0)  // Heading = 45 degrees
        );
        
        double x = robotPose.getX();  // 2.0
        double y = robotPose.getY();  // 3.0
        
        double headingDegrees = robotPose.getRotation().getDegrees();  // 45.0
        double headingRadians = robotPose.getRotation().getRadians();
        
        Pose2d originPose = new Pose2d(0.0, 0.0, Rotation2d.fromDegrees(0.0));
        
        // Transform2d is applied in the pose's own frame: translate by
        // (1.0, 0.5) in that frame, then rotate. Starting from the origin
        // at 0 degrees, the result is the pose (1.0, 0.5) at 90 degrees.
        Pose2d transformedPose = originPose.transformBy(
            new Transform2d(1.0, 0.5, Rotation2d.fromDegrees(90.0))
        );
    }
}

Coordinate Systems

Understanding coordinate systems is crucial for pose estimation. FRC uses a standard field coordinate system:

Field Coordinate System: The field has a fixed origin (typically at one corner) with X and Y axes. The positive X direction is usually toward the opposing alliance station, and the positive Y direction is typically to the left when facing positive X. All poses are measured relative to this fixed field coordinate system.

Robot Coordinate System: The robot has its own coordinate system where the front of the robot is typically the positive X direction. Robot-centric coordinates are relative to the robot's current orientation, while field-centric coordinates are relative to the fixed field.

Origin and Axes: The field origin (0, 0) is a fixed reference point. All robot positions are measured from this origin. The coordinate system is right-handed: positive X is typically forward, positive Y is left, and positive rotation is counterclockwise.
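As a concrete illustration of moving between frames, the sketch below (plain Java, not part of WPILib; the helper name and example numbers are made up for illustration) converts a point measured in the robot frame, such as a camera detection, into field coordinates by rotating it by the robot's heading and adding the robot's field position:

```java
public class FrameConversion {
    /**
     * Convert a point measured in the robot frame (X forward, Y left)
     * to field coordinates, given the robot's field pose.
     * Returns {fieldX, fieldY} in meters.
     */
    public static double[] robotToField(
            double robotX, double robotY, double robotHeadingRad,
            double pointX, double pointY) {
        double cos = Math.cos(robotHeadingRad);
        double sin = Math.sin(robotHeadingRad);
        // Rotate the robot-relative point by the robot's heading,
        // then translate by the robot's field position.
        double fieldX = robotX + pointX * cos - pointY * sin;
        double fieldY = robotY + pointX * sin + pointY * cos;
        return new double[] {fieldX, fieldY};
    }

    public static void main(String[] args) {
        // Robot at (2, 3) facing 90 degrees (toward positive field Y):
        // a point 1 m straight ahead of the robot lands at (2, 4).
        double[] p = robotToField(2.0, 3.0, Math.toRadians(90.0), 1.0, 0.0);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```

This is the same math WPILib performs internally when you compose a Pose2d with a Transform2d.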

Methods of Pose Estimation

There are several methods for estimating robot pose in FRC:

Odometry (Encoder-Based): Uses wheel encoders and a gyroscope to track robot movement. Odometry calculates position by integrating wheel movements over time (dead reckoning). It provides continuous updates but can accumulate error (drift) over time. Odometry is fast and doesn't require external references, making it ideal for continuous tracking.
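The integration step can be sketched in a few lines of plain Java (a simplified stand-in for WPILib's odometry classes, not their actual implementation): each loop, the distance driven since the last update is projected onto the field axes using the gyro heading.

```java
public class SimpleOdometry {
    private double m_x;  // field X, meters
    private double m_y;  // field Y, meters

    public SimpleOdometry(double startX, double startY) {
        m_x = startX;
        m_y = startY;
    }

    /**
     * Integrate one loop of movement (dead reckoning).
     * @param deltaDistance meters driven since the last update
     *                      (e.g. averaged from wheel encoders)
     * @param headingRad    current gyro heading, radians
     */
    public void update(double deltaDistance, double headingRad) {
        m_x += deltaDistance * Math.cos(headingRad);
        m_y += deltaDistance * Math.sin(headingRad);
    }

    public double getX() { return m_x; }
    public double getY() { return m_y; }
}
```

Because every update adds a small measurement error, the estimate drifts over time, which is why odometry benefits from periodic absolute corrections.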

Vision-Based (Camera-Based): Uses cameras to observe field features (AprilTags, field elements, game pieces) and calculate pose from these observations. Vision provides absolute positioning (no drift) but requires visibility of field features. Vision updates are typically slower than odometry and depend on camera field of view and lighting conditions.
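The core vision calculation can be sketched in 2D plain Java (a simplification; real AprilTag solvers work in 3D, and the poses below are made-up example values): given a tag's known field pose and the camera's measurement of that tag in the robot frame, invert the measurement and compose it with the tag's field pose to recover the robot's field pose.

```java
public class VisionLocalization {
    /** Minimal 2D pose: x, y in meters, theta in radians. */
    public static final class Pose {
        final double x, y, theta;
        Pose(double x, double y, double theta) {
            this.x = x; this.y = y; this.theta = theta;
        }
    }

    /** Compose: express pose b (given in a's frame) in a's parent frame. */
    static Pose compose(Pose a, Pose b) {
        double cos = Math.cos(a.theta), sin = Math.sin(a.theta);
        return new Pose(
            a.x + b.x * cos - b.y * sin,
            a.y + b.x * sin + b.y * cos,
            a.theta + b.theta);
    }

    /** Invert a pose: the parent frame as seen from the pose's frame. */
    static Pose invert(Pose p) {
        double cos = Math.cos(p.theta), sin = Math.sin(p.theta);
        // Rotate the negated translation by -theta.
        return new Pose(
            -(p.x * cos + p.y * sin),
            -(-p.x * sin + p.y * cos),
            -p.theta);
    }

    /**
     * Recover the robot's field pose from a known tag field pose and
     * the camera's measurement of the tag in the robot frame.
     */
    public static Pose robotPoseFromTag(Pose tagInField, Pose tagInRobot) {
        return compose(tagInField, invert(tagInRobot));
    }
}
```

Because the tag's field pose is fixed and known, every such measurement yields an absolute position with no accumulated drift.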

Sensor Fusion: Combines odometry and vision (or multiple sensors) to get the best of both worlds. Fusion algorithms (like Kalman filters) weight different sensor sources based on their reliability and update rates. This provides continuous tracking with periodic absolute corrections, resulting in highly accurate pose estimates.
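WPILib's pose-estimator classes handle this weighting for you, but the idea can be sketched with a simple complementary filter (plain Java; the gain value is an illustrative assumption, not a tuned constant): odometry deltas are applied every loop, and each vision fix nudges the estimate toward the absolute measurement.

```java
public class SimplePoseFusion {
    // How strongly a vision fix pulls the estimate toward it (0..1).
    // Illustrative value; real estimators weight by measurement noise.
    private static final double VISION_GAIN = 0.2;

    private double m_x, m_y;

    public SimplePoseFusion(double startX, double startY) {
        m_x = startX;
        m_y = startY;
    }

    /** Fast, drift-prone update from odometry (every loop). */
    public void addOdometryDelta(double dx, double dy) {
        m_x += dx;
        m_y += dy;
    }

    /** Slower, absolute update from vision: blend toward the fix. */
    public void addVisionMeasurement(double visionX, double visionY) {
        m_x += VISION_GAIN * (visionX - m_x);
        m_y += VISION_GAIN * (visionY - m_y);
    }

    public double getX() { return m_x; }
    public double getY() { return m_y; }
}
```

WPILib's SwerveDrivePoseEstimator and DifferentialDrivePoseEstimator follow roughly this pattern, with Kalman-style weights derived from the standard deviations you supply for each sensor source.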

Common Applications in FRC

Pose estimation enables many advanced FRC capabilities:

Autonomous Navigation: Move to specific field positions accurately. Navigate around obstacles. Return to starting positions. Execute multi-segment autonomous routines.

Path Following: Follow complex trajectories (curves, splines). Maintain accurate position along paths. Execute autonomous paths with precision.

Precise Scoring: Align mechanisms with game elements. Position robot for optimal scoring angles. Navigate to scoring locations accurately.

Field Awareness: Make decisions based on robot position. Avoid obstacles and other robots. Optimize strategy based on field state. Track game piece locations relative to robot.

Simple Pose Tracking Example

package frc.robot.examples;

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;

public class SimplePoseTracking {
    private Pose2d m_currentPose;
    
    public SimplePoseTracking() {
        m_currentPose = new Pose2d(0.0, 0.0, Rotation2d.fromDegrees(0.0));
    }
    
    /**
     * Update the robot pose (called every robot loop).
     * In a real implementation, this would come from odometry or vision updates.
     */
    public void updatePose(Pose2d newPose) {
        m_currentPose = newPose;
    }
    
    /**
     * Get current robot pose
     */
    public Pose2d getPose() {
        return m_currentPose;
    }
    
    /**
     * Check if robot is at target position (within tolerance)
     */
    public boolean isAtPosition(Pose2d targetPose, double positionTolerance, double angleTolerance) {
        double distance = m_currentPose.getTranslation().getDistance(targetPose.getTranslation());
        // Rotation2d.minus() wraps the difference into (-180, 180] degrees,
        // so the error is correct even across the +/-180 degree boundary.
        double angleError = Math.abs(
            m_currentPose.getRotation().minus(targetPose.getRotation()).getDegrees()
        );
        
        return distance < positionTolerance && angleError < angleTolerance;
    }
    
    /**
     * Calculate distance to target position
     */
    public double getDistanceToTarget(Pose2d targetPose) {
        return m_currentPose.getTranslation().getDistance(targetPose.getTranslation());
    }
}

Key Concepts

Understanding these concepts is essential for pose estimation:

Pose (Position and Orientation): A complete description of robot state in 2D space. Position is (X, Y) coordinates, orientation is heading angle. Represented by WPILib's Pose2d class.

Coordinate Frames: Reference systems for measuring positions. Field-centric coordinates are relative to fixed field origin. Robot-centric coordinates are relative to robot's current orientation. Transformations convert between coordinate frames.

Localization: The process of determining where the robot is. Odometry provides continuous localization through dead reckoning. Vision provides absolute localization through field feature observation. Fusion combines multiple sources for best accuracy.

Field-Centric vs Robot-Centric: Field-centric coordinates are fixed to the field (used for navigation and path planning). Robot-centric coordinates are relative to robot orientation (used for local movements and sensor readings). Most autonomous routines use field-centric coordinates.
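The conversion between the two frames is a rotation by the robot's heading. The plain-Java sketch below shows the field-to-robot direction, the same math WPILib's ChassisSpeeds.fromFieldRelativeSpeeds performs (simplified here to the two linear components):

```java
public class FieldToRobot {
    /**
     * Convert a field-relative velocity command into the robot frame
     * by rotating it by the negative of the robot's heading.
     * Returns {vxRobot, vyRobot} in meters per second.
     */
    public static double[] toRobotRelative(
            double vxField, double vyField, double robotHeadingRad) {
        double cos = Math.cos(robotHeadingRad);
        double sin = Math.sin(robotHeadingRad);
        return new double[] {
            vxField * cos + vyField * sin,   // robot forward component
            -vxField * sin + vyField * cos   // robot left component
        };
    }

    public static void main(String[] args) {
        // Robot facing 90 degrees: "drive toward positive field X"
        // becomes "strafe to the robot's right" (negative robot Y).
        double[] v = toRobotRelative(1.0, 0.0, Math.toRadians(90.0));
        System.out.println(v[0] + ", " + v[1]);
    }
}
```

This is why field-centric swerve drive feels the same to the driver no matter which way the robot is pointing: the joystick command is expressed in the fixed field frame and rotated into the robot frame every loop.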

Resources

WPILib: Odometry and Pose Estimation