TypeScript Models And Configuration
This page covers the request, response, and configuration models that TypeScript SDK users work with directly.
Import Pattern
Import these classes from the package root:

```typescript
import { EOIAddStreamInputs, EOIRegion, EOIStreamInfo } from "eyesonit-typescript-sdk";
```
Building Detection Configurations
The main detection-building classes are:
`EOIRegion`, `EOIVertex`, `EOIDetectionConfig`, `EOIObjectDescription`, `EOIDetectionCondition`, `EOIMotionDetection`, and `EOILine`.
Example:
```typescript
import {
  EOIRegion,
  EOIVertex,
  EOIDetectionConfig,
  EOIObjectDescription,
  EOIDetectionCondition,
} from "eyesonit-typescript-sdk";

const region = new EOIRegion();
region.name = "Front Gate";
region.polygon = [
  new EOIVertex(0, 0),
  new EOIVertex(1920, 0),
  new EOIVertex(1920, 1080),
  new EOIVertex(0, 1080),
];

const config = EOIDetectionConfig.default();
config.class_name = "vehicle";
config.class_threshold = 30;
config.object_descriptions = [
  new EOIObjectDescription("white delivery truck", false, true, 75),
  new EOIObjectDescription("road and buildings", true, false),
];
config.conditions = [
  new EOIDetectionCondition("count_greater_than", 0),
];

region.detection_configs = [config];
```
Create `EOIRegion` and `EOIDetectionConfig` objects first, then assign their properties.
Request Models
Image And Stream Requests
| Class | Purpose | Important Fields |
|---|---|---|
| `EOIProcessImageInputs` | Process one base64 image | `base64Image`, `regions`, `return_image`, `effects` |
| `EOIAddStreamInputs` | Register a stream for monitoring | `stream_url`, `name`, `frame_rate`, `index_for_search`, `search_index_types`, `regions`, `lines`, `notification`, `recording`, `effects` |
| `EOIMonitorStreamInputs` | Start monitoring a stream | `streamUrl`, `durationSeconds` |
| `EOIProcessVideoInputs` | Submit a video-processing job | `name`, `input_video_path`, `rotate_video`, `output_video_path`, `frame_rate`, `regions`, `lines`, `real_time`, `recording`, `effects`, `video_start_local_time`, `start_seconds`, `end_seconds`, `mode`, `base_image_path`, `plugins`, `validation` |
| `EOIUpdateConfigInputs` | Wrapper for `/update_config` payloads | `body` |
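As a rough sketch of what an add-stream request carries, here is a plain object mirroring the documented `EOIAddStreamInputs` field names. The values and the exact wire shape are illustrative assumptions, not the SDK's own serialization:

```typescript
// Illustrative payload using the field names from the table above.
// Values are made up for the example; consult your server for real ones.
const addStreamPayload = {
  stream_url: "rtsp://camera.local/stream1",
  name: "Front Gate Camera",
  frame_rate: 5,          // frames per second to analyze (must be >= 1)
  index_for_search: true, // enable indexing for later archive search
  regions: [] as unknown[], // EOIRegion objects would go here
  lines: [] as unknown[],   // EOILine objects would go here
};

console.log(addStreamPayload.name);
```

In practice you would construct an `EOIAddStreamInputs` instance and assign these fields, just as the region example above assigns properties after construction.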
Search Requests
`EOISearchInputs` is the shared base class for live and archive search. Its fields are:
`class_name`, `search_type`, `object_description`, `face_match_type`, `face_person_id`, `face_group_id`, `alert_threshold`, `similarity`, and `stream_list`.
Derived classes:
| Class | Adds |
|---|---|
| `EOIArchiveSearchInputs` | `start_date_time`, `end_date_time` |
| `EOILiveSearchInputs` | `duration_seconds`, `notification` |
| `EOIUpdateLiveSearchInputs` | `search_id` for pause/resume/cancel |
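The base-plus-derived layout can be sketched with plain objects that mirror the documented field names. The discriminator values and thresholds here are illustrative assumptions, not values the SDK defines:

```typescript
// Shared EOISearchInputs-style fields (subset, values assumed).
const baseSearch = {
  class_name: "person",
  alert_threshold: 50,
  stream_list: ["Front Gate Camera"],
};

// Archive search adds a date range (ISO date-times per the validation rules).
const archiveSearch = {
  ...baseSearch,
  start_date_time: "2024-01-01T00:00:00Z",
  end_date_time: "2024-01-02T00:00:00Z",
};

// Live search adds a duration and, optionally, notification settings.
const liveSearch = {
  ...baseSearch,
  duration_seconds: 3600,
};
```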
Face Recognition Requests
| Class | Purpose | Important Fields |
|---|---|---|
| `EOIAddFacerecGroupInputs` | Create a face-recognition group | `group_id`, `group_name`, `group_description` |
| `EOIAddFacerecPersonInputs` | Create a person profile | `person_id`, `person_display_name`, `person_groups`, `person_images` |
| `EOIAddFacerecPeopleInputs` | Bulk import people from a file | `file_path` |
`EOIAddFacerecPersonInputs` also includes a helper:

```typescript
addImageBase64(image: string, file_path: string)
```

That helper adds an image record and sets `capture_time` to the current timestamp.
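The helper's documented behavior can be sketched locally. The `PersonImage` record shape below is a stand-in assumption, not the SDK's actual type:

```typescript
// Stand-in for the image records held in person_images (shape assumed).
interface PersonImage {
  image: string;       // base64-encoded image data
  file_path: string;   // original path, kept for reference
  capture_time: string;
}

const personImages: PersonImage[] = [];

// Sketch of what addImageBase64 does per the docs: append a record
// and stamp capture_time with the current time.
function addImageBase64(image: string, filePath: string): void {
  personImages.push({
    image,
    file_path: filePath,
    capture_time: new Date().toISOString(),
  });
}

addImageBase64("aGVsbG8=", "/faces/alice.jpg");
```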
Configuration And Helper Models
Notifications, Recording, And Effects
| Class | Fields |
|---|---|
| `EOINotification` | `phone_number`, `include_image`, plus server-returned `alerting`, `rest_url`, `last_detection` |
| `EOIRecording` | `enabled`, `record_with_alert`, `record_with_detection`, `record_with_motion`, `record_combined_confidence_threshold`, `save_detection_data`, `save_original_copy`, `recording_folder`, `output_file_name`, `include_stream_name`, `video_recording`, `image_recording` |
| `EOIVideoRecording` | `enabled`, `record_all_frames` |
| `EOIImageRecording` | `enabled`, `record_full_frame`, `record_object_bounds`, `record_all_frames`, `frame_record_interval` |
| `EOIEffects` | overlay and output flags such as `show_motion`, `show_bounding_boxes`, `show_bounding_box_labels`, `show_lines`, `show_regions`, `show_preliminary_detections`, `show_validated_detections`, `blur_non_validated_detections`, `show_confidence_levels`, `show_object_count`, `show_frame_number`, `show_track_id`, `show_alert_text`, `font_scale` |
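A recording configuration nests `EOIVideoRecording` and `EOIImageRecording` inside `EOIRecording`. The plain-object sketch below uses the documented field names; the specific values (folder, interval) are assumptions for illustration:

```typescript
// Illustrative recording configuration mirroring the documented fields.
const recording = {
  enabled: true,
  record_with_alert: true,
  record_with_detection: false,
  recording_folder: "/var/recordings", // assumed path
  include_stream_name: true,
  video_recording: { enabled: true, record_all_frames: false },
  image_recording: {
    enabled: true,
    record_full_frame: true,
    record_object_bounds: true,
    record_all_frames: false,
    frame_record_interval: 10, // assumed: capture every 10th frame
  },
};
```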
Face Recognition And Similarity
| Class | Fields |
|---|---|
| `EOIFaceRecognitionConfig` | `match_type`, `match_threshold`, `person`, `group` |
| `EOISimilarityConfig` | `images` |
| `EOISimilarityImage` | `seed_id`, `image`, `alert`, `threshold` |
Supported face recognition match types are:
`person`, `group`, and `all_faces`.

`EOISimilarityImage.fromJsonObj(...)` also supports `image_path`, which is read from disk and converted to base64 before the request is built.
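The `image_path` convenience can be sketched with Node's `fs`: read the file and base64-encode it before filling the `image` field. The SDK's internal implementation may differ; the file path and values below are made up for the example:

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Create a tiny file so the example is self-contained.
writeFileSync("/tmp/seed.bin", "hello");

// What image_path support boils down to: read bytes, encode as base64.
const base64Image = readFileSync("/tmp/seed.bin").toString("base64");

// Plain object mirroring the documented EOISimilarityImage fields.
const similarityImage = {
  seed_id: "seed-1",
  image: base64Image,
  alert: true,
  threshold: 80,
};
```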
Video Validation
`EOIValidation` and `EOIValidationTrigger` are used by `EOIProcessVideoInputs.validation`.
Fields:
`EOIValidation.triggers`, `EOIValidationTrigger.class_name`, `EOIValidationTrigger.start_seconds`, and `EOIValidationTrigger.end_seconds`.
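Put together, a validation block is a list of class/time-window triggers. The sketch below uses the documented field names with made-up windows:

```typescript
// Plain-object sketch of an EOIValidation-style block: each trigger
// names a class and the time window (in seconds) it applies to.
const validation = {
  triggers: [
    { class_name: "vehicle", start_seconds: 0, end_seconds: 30 },
    { class_name: "person", start_seconds: 30, end_seconds: 60 },
  ],
};
```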
Response And Result Models
The most commonly consumed response payload models are:
| Class | What It Represents |
|---|---|
| `EOIStreamInfo` | stream details returned by stream APIs |
| `EOIImageDetection` | one image-processing detection result |
| `EOIVideoDetection` | one video or monitored-stream detection result |
| `EOIDetectionObject` | one detected object with class, prompt, face, or similarity metadata |
| `EOISearchResult` | one archive-search result |
| `EOIFaceDetectionObject` | face-recognition match details |
| `EOISimilarityDetectionObject` | similarity match confidence |
| `EOIBoundingBox` | object bounds |
| `EOILastDetectionInfo` | notification-side last-detection metadata |
Useful helpers on detection/result models include:
- `EOIImageDetection.getDetectedObjects()`
- `EOIImageDetection.getMaxConfidenceForDescription(description)`
- `EOIImageDetection.getMaxConfidenceDescription()`
- `EOIVideoDetection.getDetectedObjects()`
- `EOIVideoDetection.getMaxConfidenceDescription()`
- `EOIDetectionObject.getConfidenceForDescription(description)`
- `EOIDetectionObject.getMaxConfidenceDescription()`
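The kind of logic these confidence helpers expose can be sketched locally. The `DetectedObject` shape and the description-to-confidence map below are stand-in assumptions, not the SDK's actual types:

```typescript
// Stand-in for a detected object carrying per-description confidences.
interface DetectedObject {
  confidences: Record<string, number>; // description -> confidence (assumed shape)
}

// Sketch of getMaxConfidenceDescription-style logic: return the
// description with the highest confidence, or null if there are none.
function maxConfidenceDescription(obj: DetectedObject): string | null {
  let best: string | null = null;
  let bestScore = -Infinity;
  for (const [description, confidence] of Object.entries(obj.confidences)) {
    if (confidence > bestScore) {
      bestScore = confidence;
      best = description;
    }
  }
  return best;
}

const detection: DetectedObject = {
  confidences: { "white delivery truck": 82, "road and buildings": 12 },
};
const top = maxConfidenceDescription(detection);
```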
Defaults And Factory Helpers
Convenience helpers:
- `EOIDetectionConfig.default()`
- `EOIMotionDetection.default()`
- `EOIFaceRecognitionConfig.default()`
- `EOISimilarityConfig.default()`
- `EOISimilarityImage.default()`
- `EOIImageRecording.noImageRecording()`
- `EOIVideoRecording.noVideoRecording()`
Use them when you want a valid starting object and then override only the fields you care about.
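The defaults-then-override pattern looks like this; `DetectionConfigSketch` is a local stand-in (its default values are assumptions), not the real `EOIDetectionConfig`:

```typescript
// Local stand-in illustrating the static-factory-defaults pattern.
class DetectionConfigSketch {
  class_name = "person";   // assumed default
  class_threshold = 50;    // assumed default

  static default(): DetectionConfigSketch {
    return new DetectionConfigSketch();
  }
}

const cfg = DetectionConfigSketch.default();
cfg.class_name = "vehicle"; // override only the fields you care about
```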
Validation Rules
The SDK-side validator enforces these constraints before a request is sent:
- `stream_url` must be non-empty
- stream names, region names, and line names must be at least 3 characters
- polygons must contain at least 3 vertices
- lines must contain at least 2 vertices
- vertex coordinates cannot be negative
- valid object classes are `person`, `vehicle`, `bag`, `animal`, and `unknown`
- `object_size` must be at least `100` when specified
- video and stream `frame_rate` must be at least `1`
- motion detection threshold must be at least `10`
- live search `alert_threshold` must be greater than `0` and less than `100`
- `search_id` for live-search updates must be `-1` or greater than `0`
- archive search dates must be valid ISO date-times on or after `2020-01-01T00:00:00Z`
- group and person IDs for face recognition must be present for the selected match mode
- similarity configs must include at least one valid `EOISimilarityImage`
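A few of these checks can be sketched as a local validator. This is an illustration of the rules, not the SDK's actual validation code:

```typescript
interface Vertex { x: number; y: number; }

// Local sketch of three of the documented region checks:
// name length, minimum vertex count, and non-negative coordinates.
function validateRegion(name: string, polygon: Vertex[]): string[] {
  const errors: string[] = [];
  if (name.length < 3) errors.push("name must be at least 3 characters");
  if (polygon.length < 3) errors.push("polygon must contain at least 3 vertices");
  if (polygon.some((v) => v.x < 0 || v.y < 0)) {
    errors.push("vertex coordinates cannot be negative");
  }
  return errors;
}

const ok = validateRegion("Front Gate", [
  { x: 0, y: 0 }, { x: 100, y: 0 }, { x: 100, y: 100 },
]);
const bad = validateRegion("AB", [{ x: -1, y: 0 }]);
```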
API Notes
- `EyesOnItAPI.updateConfig(...)` accepts `EOIUpdateConfigInputs`, and the nested `body` payload should match the configuration accepted by your EyesOnIt server.
- The response wrappers all flatten the raw `EOIResponse` into typed fields instead of exposing `data` directly.
- `processImage(...)` injects the request image into the API body as `file`.
- `removeStream(...)`, `stopMonitoringStream(...)`, `getStreamDetails(...)`, `getLastDetectionInfo(...)`, and `getVideoFrame(...)` take raw string arguments.