Simulating Aiming and Getting in Range
The following example comes from the PhotonLib example repository (Java/C++), where the full source code is available.
Knowledge and Equipment Needed
Everything required in Combining Aiming and Getting in Range.
Background
The previous examples show how to run PhotonVision on a real robot, with a physical drivetrain moving around and interacting with the software.
This example builds on that by simulating robot motion and incorporating that motion into a SimVisionSystem. This allows you to test control algorithms on your development computer, without requiring access to a real robot.
Walkthrough
First, in the main Robot source file, we add support for periodically updating a new simulation-specific object. This logic only runs while in simulation:
DrivetrainSim dtSim;

@Override
public void simulationInit() {
    dtSim = new DrivetrainSim();
}

@Override
public void simulationPeriodic() {
    dtSim.update();
}
Then, we add in the implementation of our new DrivetrainSim class. Please reference the WPILib documentation on physics simulation.
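At its core, the DrivetrainSim class wraps WPILib's DifferentialDrivetrainSim physics model. A minimal sketch of that wrapper follows; the class shape mirrors the example, but the motor, gearing, mass, and dimension values here are illustrative placeholders, not the example's exact constants:

```java
import edu.wpi.first.math.system.plant.DCMotor;
import edu.wpi.first.math.util.Units;
import edu.wpi.first.wpilibj.simulation.DifferentialDrivetrainSim;

// Minimal sketch of a drivetrain physics-simulation wrapper.
// All numeric parameters below are illustrative values; use your
// robot's measured characteristics in real code.
class DrivetrainSim {
    // WPILib's physics model for a differential (tank-drive) drivetrain.
    private final DifferentialDrivetrainSim drivetrainSimulator =
        new DifferentialDrivetrainSim(
            DCMotor.getCIM(2),           // 2 CIM motors per side
            8.45,                        // gearbox reduction
            2.1,                         // moment of inertia, kg*m^2
            26.5,                        // robot mass, kg
            Units.inchesToMeters(3.0),   // wheel radius, m
            0.546,                       // trackwidth, m
            null);                       // null = no simulated sensor noise

    public void update() {
        // Called from simulationPeriodic(); advance the model by 20 ms.
        drivetrainSimulator.update(0.02);
    }
}
```

The full example's update() additionally feeds motor commands into setInputs() and forwards the resulting pose to the vision simulation, as shown later in this walkthrough.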
Simulated Vision support is added with the following steps:
Creating the Simulated Vision System
First, we create a new SimVisionSystem to represent our camera and coprocessor running PhotonVision.
// Simulated Vision System.
// Configure these to match your PhotonVision Camera,
// pipeline, and LED setup.
double camDiagFOV = 170.0; // degrees - assume wide-angle camera
double camPitch = Robot.CAMERA_PITCH_RADIANS; // radians
double camHeightOffGround = Robot.CAMERA_HEIGHT_METERS; // meters
double maxLEDRange = 20; // meters
int camResolutionWidth = 640; // pixels
int camResolutionHeight = 480; // pixels
double minTargetArea = 10; // square pixels

SimVisionSystem simVision =
    new SimVisionSystem(
        "photonvision",
        camDiagFOV,
        new Transform3d(
            new Translation3d(0, 0, camHeightOffGround), new Rotation3d(0, camPitch, 0)),
        maxLEDRange,
        camResolutionWidth,
        camResolutionHeight,
        minTargetArea);
Next, we create objects to represent the physical location and size of the vision targets we are calibrated to detect. This example models the down-field high goal vision target from the 2020 and 2021 games.
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 208
double targetWidth = Units.inchesToMeters(41.30) - Units.inchesToMeters(6.70); // meters
// See
// https://firstfrc.blob.core.windows.net/frc2020/PlayingField/2020FieldDrawing-SeasonSpecific.pdf
// page 197
double targetHeight = Units.inchesToMeters(98.19) - Units.inchesToMeters(81.19); // meters
// See https://firstfrc.blob.core.windows.net/frc2020/PlayingField/LayoutandMarkingDiagram.pdf
// pages 4 and 5
double tgtXPos = Units.feetToMeters(54);
double tgtYPos =
    Units.feetToMeters(27.0 / 2.0) - Units.inchesToMeters(43.75) - Units.inchesToMeters(48.0 / 2.0);
Pose3d farTargetPose =
    new Pose3d(
        new Translation3d(tgtXPos, tgtYPos, Robot.TARGET_HEIGHT_METERS),
        new Rotation3d(0.0, 0.0, 0.0));
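As a sanity check on the unit math above, the same arithmetic can be reproduced in plain Java. The helper methods below are stand-ins for WPILib's Units class, using the exact definitions of 0.0254 m per inch and 0.3048 m per foot:

```java
// Standalone check of the target-geometry arithmetic above.
public class TargetGeometryCheck {
    // Stand-ins for WPILib's Units.inchesToMeters / Units.feetToMeters.
    static double inchesToMeters(double inches) { return inches * 0.0254; }
    static double feetToMeters(double feet) { return feet * 0.3048; }

    // Target opening width: 41.30 in - 6.70 in = 34.6 in, about 0.879 m.
    static final double TARGET_WIDTH = inchesToMeters(41.30) - inchesToMeters(6.70);
    // Target opening height: 98.19 in - 81.19 in = 17.0 in, about 0.432 m.
    static final double TARGET_HEIGHT = inchesToMeters(98.19) - inchesToMeters(81.19);
    // Down-field distance to the far wall: 54 ft, about 16.459 m.
    static final double TGT_X_POS = feetToMeters(54);
    // Lateral position: half the 27 ft field width, minus the offsets
    // from the layout diagram; about 2.394 m.
    static final double TGT_Y_POS =
        feetToMeters(27.0 / 2.0) - inchesToMeters(43.75) - inchesToMeters(48.0 / 2.0);

    public static void main(String[] args) {
        System.out.printf("width=%.3f m, height=%.3f m, x=%.3f m, y=%.3f m%n",
            TARGET_WIDTH, TARGET_HEIGHT, TGT_X_POS, TGT_Y_POS);
    }
}
```

Note that half of the 27 ft field width must be written as 27.0 / 2.0; the integer expression 27 / 2 would silently truncate to 13 ft.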
Finally, we add our target to the simulated vision system.
simVision.addSimVisionTarget(new SimVisionTarget(farTargetPose, targetWidth, targetHeight, -1));
SmartDashboard.putData("Field", field);
If you have additional targets you want to detect, you can add them in the same way as the first one.
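For instance, registering a second target is just another addSimVisionTarget call. The pose and dimensions below are made-up placeholder values, not from any real field layout:

```java
// Hypothetical second target; pose and size are placeholder values.
Pose3d secondTargetPose =
    new Pose3d(new Translation3d(2.0, 1.0, 2.5), new Rotation3d(0.0, 0.0, Math.PI));
simVision.addSimVisionTarget(
    new SimVisionTarget(secondTargetPose, targetWidth, targetHeight, -1));
```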
Updating the Simulated Vision System
Once all the properties of our simulated vision system are defined, the runtime work is minimal: simply pass the robot's pose to the simulated vision system periodically.
public void update() {
    double leftMotorCmd = 0;
    double rightMotorCmd = 0;

    if (DriverStation.isEnabled() && !RobotController.isBrownedOut()) {
        leftMotorCmd = leftLeader.getSpeed();
        rightMotorCmd = rightLeader.getSpeed();
    }

    drivetrainSimulator.setInputs(
        leftMotorCmd * RobotController.getInputVoltage(),
        -rightMotorCmd * RobotController.getInputVoltage());
    drivetrainSimulator.update(0.02);

    // Update PhotonVision based on our new robot position.
    simVision.processFrame(drivetrainSimulator.getPose());

    field.setRobotPose(drivetrainSimulator.getPose());
}
The rest is done behind the scenes.
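Because the simulated vision system publishes its results the same way a real coprocessor would, the robot-side vision code needs no changes. A sketch of typical consuming code, assuming the camera name matches the "photonvision" name passed to SimVisionSystem above:

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;

class VisionConsumerSketch {
    // Same PhotonCamera code works on the real robot and in simulation,
    // as long as the name matches the one given to SimVisionSystem.
    private final PhotonCamera camera = new PhotonCamera("photonvision");

    // Call periodically, e.g. from robotPeriodic() or a subsystem.
    void aimAtTarget() {
        PhotonPipelineResult result = camera.getLatestResult();
        if (result.hasTargets()) {
            double yawDegrees = result.getBestTarget().getYaw();
            // ...feed yawDegrees into the aiming controller...
        }
    }
}
```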