
Colophon

Suggested citation

Reyserhove L, Norton B & Desmet P (2023) Best Practices for Managing and Publishing Camera Trap Data. GBIF Secretariat: Copenhagen. https://doi.org/10.35035/doc-0qzp-2x37

Contributors

Tanja Milotic and Pieter Huybrechts contributed to the introduction and figures.

Licence

The document Best Practices for Managing and Publishing Camera Trap Data is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Document control

v1.0, December 2023

Abstract

Camera traps have emerged as important tools for monitoring the state of biodiversity and natural ecosystems. The proliferation of data from such sensors has made data management, rather than data collection, the limiting factor in camera trap-related research. This guide provides recommendations for camera trap data management and publication to GBIF. It is intended for anyone running a camera trap study, in particular data stewards, data publishers, and others working in biodiversity informatics.

1. Introduction

1.1. Why this guide?

Camera traps have emerged as a powerful technology for the semi-automated monitoring of natural ecosystems. Their success has led to an exponential growth of camera trap data worldwide. Herein lies a major challenge: large volumes of data are waiting to be classified, interpreted and archived. Data management, rather than data collection, has become a limiting factor for camera trap research. The proliferation of camera trap projects has also led to a diversity of terminologies, classification methods and data management practices, which impedes transparency, interoperability, cross-project collaboration and meta-analyses. A large, interconnected network of remote cameras could act as an instrument for reliable, real-time biodiversity management and decision making, but if we are not able to combine data, camera trap research will not reach its full potential. This is why camera trap data need to be open and FAIR (findable, accessible, interoperable and reusable, see Wilkinson et al. (2016)), so that both humans and machines can use these valuable data for present and future applications.

To optimize the (re)use of camera trap data, there is a need for best practice guidelines. Several guides exist that tackle one or more elements of the camera trap research life cycle: planning, technology and techniques, study design, data collection, analysis, data management methods and data publication (O’Connell et al. 2011; Rovero et al. 2013; Meek et al. 2014; Cadman et al. 2014; Burton et al. 2015; Wearn and Glover-Kapfer 2017). The camera trapping community has made significant progress since their publication, but no up-to-date guidelines are available that focus solely on camera trap data management and publication. This guide aims to fill that gap.

1.2. Target audience

This guide is intended to be useful for anyone managing a camera trap study. Its specific focus on data management and publication makes it particularly useful for data stewards, data publishers, database and information managers, and students working in biodiversity informatics.

The authors of this guide have experience with traditional camera traps (designed for medium-to-large-sized terrestrial mammals), but many of the recommendations, as well as the Camtrap DP standard, should be applicable to data from other types of camera traps.

1.3. What this guide is not about

This guide primarily focuses on the management, quality control, enhancement and publication of camera trap data. The following topics are out of scope:

  • Planning a camera trap study: types of camera traps, study design, etc.

  • Camera trap deployment and collection: field work, baits and lures, data retrieval, etc.

  • Analysis: software for analysis, ecological modelling, bias correction, etc.

  • Data from moving cameras: underwater robots, vehicle-mounted cameras, drones, etc.

Many extensive guides on these topics are already available. See Table 1 and Table 2 for a brief overview.

2. The use of camera traps

2.1. What are camera traps?

Camera traps are recording devices that are deployed in the field to automatically capture images or videos of wildlife activity. They are also known as game cameras, trail cameras or scouting cameras. They can record media at a regular interval (time lapse) or when triggered by the activity of an animal. Traditionally, camera traps refer to those designed to record medium-to-large-sized terrestrial mammals with a passive infrared (PIR) sensor (Hobbs and Brehme 2017), but other types exist, e.g. for the marine environment or for insects. Just like satellites, drones, GPS trackers or acoustic sensors, camera traps collect machine observations. Acoustic sensors in particular are quite similar to camera traps, using audio rather than images/video to monitor the surrounding ecosystem. All these technologies have the benefit that they can collect data at a scale and frequency that would be challenging to obtain through human observations, and they typically suffer less from human bias, interpretation or interference.

2.2. Why are camera traps used?

In the past decades camera traps have been increasingly used to collect biodiversity data in a non-invasive manner with minimal disturbance of wildlife. As early as the 1890s, George Shiras was the first to develop a method using tripwires and a flash system with which wild animals photographed themselves (Kucera and Barrett 2011). The first scientific camera trap studies date back to the beginning of the 20th century (Chapman 1927). Since those pioneering days, technological advances in digital photography and infrared sensors have led to cost-effective, non-invasive detection of elusive wildlife (Burton et al. 2015). Camera traps have become popular research tools: they are easy to install, relatively cheap and do not require special permissions or training, and are therefore used by professional researchers and hobbyists at a very broad scale. Consequently, the number of annual publications concerning camera trap studies has grown more than 80-fold since the 1990s. Camera trap technology is used to sample communities of medium-to-large-sized mammal and bird species inhabiting freshwater, terrestrial, fossorial, arboreal and marine habitats, and has proven to be an excellent tool for biodiversity monitoring initiatives (Delisle et al. 2021).

The most frequently studied animal taxa include ungulates, carnivores, primates and birds, although the most innovative sensors allow the detection of small mammals, amphibians, reptiles, fish and invertebrates as well (Hobbs and Brehme 2017). Camera trap technology is suitable for gathering occurrence data, as well as data on the abundance, density, diversity and distribution of species (Table 1), and for answering behavioural questions such as activity patterns and responses to human disturbance. Furthermore, camera traps are unselective in the species they record and are therefore often used in species interaction studies; “bycatch” data from target-species studies can be useful for other studies as well. Camera traps are also often used to monitor rare, threatened and endangered species in remote and inaccessible terrain.

Table 1. Aims and outputs of camera trap studies as taken from Wearn and Glover-Kapfer (2017).
Study aim | Output | Key references
abundance | density | Sollmann et al. (2012); Tobler and Powell (2013); Rowcliffe et al. (2008); Rowcliffe et al. (2016)
abundance | relative abundance | Rowcliffe et al. (2008); Wearn et al. (2013); Cusack et al. (2015)
distribution | occupancy | Mackenzie and Royle (2005); Guillera-Arroita et al. (2010); O’Brien (2010); Shannon et al. (2014)
diversity | beta-diversity | Tobler et al. (2008); Cusack et al. (2015)
diversity | diversity indices |
diversity | richness |
species presence | species checklist | Tobler et al. (2008); Wearn et al. (2013)

2.3. Camera trap project life cycle

The full life cycle of camera trap research includes the planning phase, the deployment of the camera traps in the field, and the collection, management, analysis and sharing of data. This guide is not intended to cover all aspects of the camera trap project life cycle: its focus lies on the management and publishing of the data. For readers interested in a general review of camera trap research, the planning phase or the deployment of camera traps in the field, Table 2 provides an overview of the available resources.

Table 2. Stages in the camera trap life cycle and key references.
Topic | What | Reference
Full camera trapping life cycle | General state of the art; future developments and camera trap constraints | Cadman et al. (2014); Rovero and Zimmermann (2016); Wearn and Glover-Kapfer (2017); Meek et al. (2020); Glover‐Kapfer et al. (2019)
Planning | How to design a camera trap study? What type of camera traps to choose? | Wearn and Glover-Kapfer (2017); Rovero et al. (2010); Sunarto et al. (2013); Meek et al. (2014); Kays et al. (2020); Caravaggi et al. (2020); Hobbs and Brehme (2017); McIntyre et al. (2020)
Deployment and collection | Where and how to mount your camera? Camera trap storage and maintenance; baits and lures; combating theft and vandalism; set-up in different environments | Wearn and Glover-Kapfer (2017); Rovero et al. (2010); Rovero et al. (2013); Meek et al. (2014)
Data management and sharing | Data management considerations; from field to hard disk; annotating camera trap data; software for data management; sharing and publishing camera trap data | Section 3 and Section 4 of this guide
Analysis | Software for analysing data | Sunarto et al. (2013)

For a camera trap dataset to be useful, you should clearly define the aim and objectives in the planning phase: what would you like to know from which species groups? Each aim brings along its own key characteristics to consider, such as camera height and direction, seasonality, bait usage, detection zone features, camera settings, trigger and flash type. These characteristics must be included in the published dataset for it to be useful in subsequent analyses. Readers interested in this topic should consult the key references in Table 1. Additionally, Wearn and Glover-Kapfer (2017) provide a comprehensive overview of more general camera trap survey design aspects.

3. Managing camera trap data

Once a camera trap is operational in the field, it can generate hundreds, thousands or even millions of pictures or videos in the time frame of the project. Ultimately, these data need to be analysed. But first, data need to be retrieved, stored, organized and labelled. The content of the images needs interpretation or annotation, either manually or facilitated by technology. All these steps are part of the data management process.

Data management is one of the real bottlenecks in camera trap research. The massive amount of information stored on memory cards requires a high investment in terms of human effort and time. If data storage and classification lag behind, large volumes of data will remain unused and eventually get lost. Manual extraction, organization and labelling can introduce human errors and lead to data loss. In addition, there is often more information in the image or video than just the target species (group) or topic of the project. Camera traps inherently detect multiple species, and observations of non-target species provide valuable data for other research objectives. If all data, including non-target species, were annotated, more relevant outcomes could be generated for the same funding (Young et al. 2018; Wearn and Glover-Kapfer 2017).

To conclude, every researcher should pay attention to data hygiene. The underlying idea: better ten well-documented records than a hundred poorly documented ones. Only in this way can we turn the cacophony of raw image data into useful quantitative data.

In this section, we introduce a number of general conventions for sound data management and zoom in on what camera trap data exactly are. For each type of camera trap data (project metadata, media, deployments, observations), we focus on how to best manage them and what data management platforms can facilitate you in doing so.

3.1. What are camera trap data?

Intuitively, we associate camera trap data with the media files captured by the cameras. But to be able to use these for research they need to be documented with additional information. Media files should be classified to know what species were observed. Information on camera deployment, duration, location, alignment and sampling methodology is needed to know when, where and how likely those species were to be observed. Finally, we need information about the project/study as a whole to know the scope and who was involved.

Overall, we can distinguish four types of camera trap data:

  • Project metadata: information about the camera trap project/study as a whole

  • Media files: images, videos or sound files captured by the cameras, including their EXIF metadata

  • Deployments: information regarding the camera location and alignment, the sampling duration and covariates. Typically not automatically registered by the camera.

  • Observations: information regarding what can be seen or heard on the media files (i.e. objects of interest), such as animals, humans or vehicles. The aim is typically to record what animal species were observed, optionally including information on their group size, life stage, sex and behaviour.

These data can be organized in different ways, based on personal preference or what data management system is used. We can however identify a number of core concepts, hereafter referred to as classes, that constitute a “model” for camera trap data (see Figure 1). Those classes are project, organization, participant, deployment, location, device, media, sequence and observation.

Figure 1. Class diagram for camera trap data, consisting of different classes (boxes) and relationships (lines). Mandatory classes are dark blue, optional classes light blue. The line ends represent the cardinality of the relationship, with 0| zero or one, 0< zero to many, |< one to many, and || exactly one. A deployment for example has zero to many media files, while a media file belongs to exactly one deployment.

3.2. Project Metadata

Project metadata document a camera trap project/study as a whole. Who was involved, what was the rationale, what sampling methods were used and what was the scope? Projects can vary a lot in size and are sometimes part of meta-studies or subdivided into subprojects to better manage their hundreds of thousands of deployments (e.g. Snapshot USA). Describe projects at a level that makes sense. Is it possible to identify a person who can make decisions or answer questions about the project? Is it easy to describe the methodology? If the answer to those questions is no, then it might be better to consider those separate projects. Also note that we strongly recommend publishing data at project level, i.e. one dataset for one project (see Section 4).

When published, project metadata are a substantial part of the dataset metadata (see Figure 2), allowing others to discover your dataset when searching for certain keywords and to assess if it fits their research needs. When describing your project, think about what others would need to understand it. We recommend covering the following aspects (but do not let perfection be the enemy of good):

  • Title

  • Identifier and/or acronym

  • Description (including rationale)

  • Contributors and their roles (including organizations)

  • Link to website

  • Keywords

  • Funding

  • Sampling design (simple random, systematic random, etc.)

  • Resulting geographic scope

  • Capture method (type of sensor, motion detection and/or time lapse)

  • Capture schedules (continuous, nightly, etc.)

  • Capture outages and unplanned events

  • Resulting temporal scope

  • Classification method (experts, crowdsourcing, AI)

  • Classification granularity and scope (i.e. which detectable organisms were classified: all animals, mammals only, known individuals only, etc.)

  • Resulting taxonomic scope

  • Data filtering before publication

Data management systems typically organize this information into the following classes: project, subproject, organization and participant (see Figure 1).
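
To make this concrete, below is a minimal sketch of how project metadata could be captured in a machine-readable file, written in Python. The field names loosely follow the Camtrap DP project profile and the values are invented for illustration; check the current Camtrap DP specification before reusing them.

import json

# Minimal project metadata sketch. Field names loosely follow the
# Camtrap DP project profile; values are invented for illustration.
project = {
    "title": "Example camera trap survey of riparian mammals",
    "acronym": "EXAMPLE",
    "description": "Monitoring of semi-aquatic mammals along waterways.",
    "samplingDesign": "targeted",            # e.g. simpleRandom, systematicRandom
    "captureMethod": ["activityDetection"],  # and/or "timeLapse"
    "contributors": [
        {"title": "Jane Doe", "role": "principalInvestigator"},
        {"title": "John Doe", "role": "contact"},
    ],
}

with open("project_metadata.json", "w") as f:
    json.dump(project, f, indent=2)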

Figure 2. Screenshot of a camera trap dataset published on GBIF (Cartuyvels et al. 2022). A substantial part of its metadata are derived from the project metadata.

3.2.1. Participants and roles

A participant is a person associated with a camera trap project. Information typically captured about a participant includes their first name, last name, email address and ORCID. The role(s) of a participant are defined in relation to a project (e.g. principal investigator, contact person) and organization (e.g. researcher) (see Figure 1). Different names are used for similar roles (see Table 3). We recommend simplifying those to a limited set of controlled values (e.g. package.contributors.role) when publishing data.

Table 3. Participant roles in camera trap studies, as defined by different formats and data management systems.
Camtrap DP | CTMS (Forrester et al. 2016) | Wildlife camera metadata protocol (Resources Information Standards Committee RISC 2019) | DataCite (DataCite Metadata Working Group 2021) | EML (GBIF Secretariat 2015) | Agouti (Casaer et al. 2019) | Wildlife Insights (Ahumada et al. 2020)
contact | ProjectContact | Project Coordinator | ContactPerson | Point of Contact | Project coordinator | Project Owner
principalInvestigator | PrincipalInvestigator | | ProjectLeader; ProjectManager; Supervisor | Owner; Principal Investigator | Principal investigator | Project Owner
rightsHolder | | | RightsHolder | | |
publisher | | | Distributor | Distributor; Publisher | |
contributor | sequenceIdentifiedBy; PhotoTypeIdentifiedBy | Crew Member; Surveyor | DataManager; DataCurator; DataCollector; ProjectMember; Researcher | Curator; Editor; Author; Content Provider; Originator | Admin; Taxonomic expert; Photo processor; Volunteer | Project Editor; Project Contributor; Project Tagger
 | | | Other | User; Processor; Reviewer; Metadata Provider | View only; Dummy; Awaiting access | Project Viewer

3.3. Media files

Media files are the raw data a camera trap collects. For most camera trap studies, these will be images (see Figure 3 for an example), but modern camera traps can record other types of media as well, such as video or sound. Videos can capture animal behaviour in more detail than images and are often suitable for outreach, but they require more battery power, produce larger files and are harder to process.

An often-used compromise is to take a series of images when a camera is triggered (e.g. 10 images, 1 second apart). When processing the media files, those related images can be combined into a sequence. A sequence not only combines images resulting from a single trigger, but also consecutive triggers that fall within a preset independence interval (e.g. 120 s). That way, continued activity is captured in a single sequence/event (see Table 4).
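
The grouping logic itself is simple: sort the media files by timestamp and start a new sequence whenever the gap to the previous file exceeds the independence interval. Below is a minimal sketch in Python; the timestamps and the 120 s interval are illustrative.

from datetime import datetime, timedelta

def group_into_sequences(timestamps, interval=timedelta(seconds=120)):
    """Group timestamps into sequences: a new sequence starts when the
    gap to the previous media file exceeds the independence interval."""
    sequences = []
    for ts in sorted(timestamps):
        if sequences and ts - sequences[-1][-1] <= interval:
            sequences[-1].append(ts)  # continued activity
        else:
            sequences.append([ts])    # new sequence/event
    return sequences

# The three triggers in Table 4 fall within seconds of each other,
# so they end up in a single sequence.
media = [datetime(2020, 6, 12, 4, 4, s) for s in (29, 30, 31, 41, 49)]
print(len(group_into_sequences(media)))  # 1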

Figure 3. An image captured by a camera trap deployed as part of the MICA project (Life MICA 2019). It is the fifth of a series of ten images and indicates the date, time and temperature. It is a black and white photo of a creek occupied by three birds: a grey heron (Ardea cinerea) in the foreground and a female and male mallard (Anas platyrhynchos) in the background. Source.
Table 4. A series of images, resulting from 3 consecutive triggers and captured in one sequence. Source.
Trigger | Media ID | Timestamp | File path
1 | e68deaed | 2020-06-12T04:04:29Z | https://multimedia.agouti.eu/assets/e68deaed-a64e-4999-87a3-9aa0edf5970d/file
1 | c5efbcb3 | 2020-06-12T04:04:30Z | https://multimedia.agouti.eu/assets/c5efbcb3-34f5-4a59-bc15-034e01b05475/file
1 | 07eee194 | 2020-06-12T04:04:31Z | https://multimedia.agouti.eu/assets/07eee194-85c7-4586-96be-7b42ff6f1132/file
1 | 479a93c4 | 2020-06-12T04:04:31Z | https://multimedia.agouti.eu/assets/479a93c4-bc70-4e91-9ab5-b058df232ed0/file
1 | 6d65f3e4 | 2020-06-12T04:04:32Z | https://multimedia.agouti.eu/assets/6d65f3e4-4770-407b-b2bf-878983bf9872/file
1 | 5ba57018 | 2020-06-12T04:04:32Z | https://multimedia.agouti.eu/assets/5ba57018-fd06-4319-bc80-ba6efa076c7c/file
1 | c39a0749 | 2020-06-12T04:04:33Z | https://multimedia.agouti.eu/assets/c39a0749-b8db-4853-81c4-32b9a99868ca/file
1 | d2ed4389 | 2020-06-12T04:04:34Z | https://multimedia.agouti.eu/assets/d2ed4389-14e6-45d7-b67d-b52d3cffd0fb/file
1 | 51549c25 | 2020-06-12T04:04:35Z | https://multimedia.agouti.eu/assets/51549c25-e565-4ece-a26e-12442ccc3fcb/file
1 | b78bb29f | 2020-06-12T04:04:35Z | https://multimedia.agouti.eu/assets/b78bb29f-fbf3-49b0-911b-ca4e5a95d801/file
2 | d6785b65 | 2020-06-12T04:04:41Z | https://multimedia.agouti.eu/assets/d6785b65-24fa-4663-8539-e5fb261d069d/file
2 | 2b860458 | 2020-06-12T04:04:42Z | https://multimedia.agouti.eu/assets/2b860458-742b-4fca-937c-2a27742dccb0/file
2 | d45648b9 | 2020-06-12T04:04:43Z | https://multimedia.agouti.eu/assets/d45648b9-76d1-4500-898c-dd3c3f31a0b8/file
2 | eecd8ce1 | 2020-06-12T04:04:43Z | https://multimedia.agouti.eu/assets/eecd8ce1-2b13-49b7-bcec-c0056848aa62/file
2 | 48d26ebc | 2020-06-12T04:04:44Z | https://multimedia.agouti.eu/assets/48d26ebc-ba6e-4245-8f52-c2cc1d64ef1f/file
2 | 4afd0344 | 2020-06-12T04:04:44Z | https://multimedia.agouti.eu/assets/4afd0344-3cec-4942-987b-96b69da75e6b/file
2 | 916964ac | 2020-06-12T04:04:45Z | https://multimedia.agouti.eu/assets/916964ac-6389-4d06-8853-2eac6c36d8e7/file
2 | 3e8e355a | 2020-06-12T04:04:46Z | https://multimedia.agouti.eu/assets/3e8e355a-6253-4a5f-a950-2f934821b7f7/file
2 | b7792672 | 2020-06-12T04:04:46Z | https://multimedia.agouti.eu/assets/b7792672-6a31-484a-a97b-e19e34657021/file
2 | 1683dd3b | 2020-06-12T04:04:47Z | https://multimedia.agouti.eu/assets/1683dd3b-7791-493a-84c6-1bb50541fd97/file
3 | e6c63f88 | 2020-06-12T04:04:49Z | https://multimedia.agouti.eu/assets/e6c63f88-a31f-4f06-9410-3213baed08ab/file
3 | 91a1ba54 | 2020-06-12T04:04:50Z | https://multimedia.agouti.eu/assets/91a1ba54-5e19-4f18-88f8-8dd0dd3ef836/file
3 | 233a2f40 | 2020-06-12T04:04:51Z | https://multimedia.agouti.eu/assets/233a2f40-b0c5-4b93-90e4-e254d2e148f5/file
3 | 5e01e638 | 2020-06-12T04:04:51Z | https://multimedia.agouti.eu/assets/5e01e638-d36f-4ca2-957d-7bbdc76dcc89/file
3 | dadf1718 | 2020-06-12T04:04:52Z | https://multimedia.agouti.eu/assets/dadf1718-90bd-438e-8649-3663f226072f/file
3 | 643d63a4 | 2020-06-12T04:04:52Z | https://multimedia.agouti.eu/assets/643d63a4-dd46-4b9d-b3de-665fe2a46754/file
3 | 19744c44 | 2020-06-12T04:04:53Z | https://multimedia.agouti.eu/assets/19744c44-03ea-438f-9dc7-927e6e494ee1/file
3 | edc345bc | 2020-06-12T04:04:54Z | https://multimedia.agouti.eu/assets/edc345bc-b58a-4c0d-8659-89132449cc3c/file
3 | b6e435f8 | 2020-06-12T04:04:54Z | https://multimedia.agouti.eu/assets/b6e435f8-b22b-4916-8275-5fbff2d84a76/file
3 | 54c5d869 | 2020-06-12T04:04:55Z | https://multimedia.agouti.eu/assets/54c5d869-8492-4b16-a72

A camera also records metadata when creating a media file. This can include date and time, camera settings (like shutter speed, exposure level, flash status) and other properties. For images, this information is stored as part of the file and is expressed in the Exchangeable Image File Format (EXIF) (see Table 5). Metadata for videos is less standardized, although some formats like AVI and MOV support EXIF.

Data management systems typically organize media files and the associated metadata into the following classes: media, media type and sequence (see Figure 1).

Table 5. Selected properties included in the EXIF metadata of the image in Figure 3.
Property | Value
File type | JPEG
MIME type | image/jpeg
Image width | 2048 pixels
Image height | 1440 pixels
Horizontal resolution | 72 dpi
Vertical resolution | 72 dpi
EXIF version | 0220
Make | RECONYX
Model | HYPERFIRE 2 COVERT
Date time original | 2020:06:12T06:04:32Z
Time zone offset | N/A
Exposure time / shutter speed | 1/85
ISO | 200
Colour space | sRGB
Flash | Auto, Fired
Exposure mode | Auto
White balance | Manual
Scene capture type | Standard

3.3.1. Timestamps

The date and time a media file was recorded is the most important aspect of its metadata. This information is used to assess when animals were observed and cannot be derived later (in contrast with e.g. location). Since this information is derived from the camera’s internal clock, it is critical to verify it is set correctly. We recommend setting the clock to Coordinated Universal Time (UTC) or local winter time. Disable automatic switching to summer time and record the used time zone as part of the deployment.
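
As a sanity check, the recorded timestamps can be read programmatically and compared with the expected deployment window. A minimal sketch using the Pillow library (an assumption; any EXIF reader will do, and tag handling may differ between library versions):

from PIL import Image

def read_datetime_original(path):
    """Return the EXIF DateTimeOriginal string (local camera time),
    e.g. '2020:06:12 06:04:32', or None if absent."""
    exif = Image.open(path).getexif()
    # 0x8769 points to the EXIF IFD; tag 36867 is DateTimeOriginal
    return exif.get_ifd(0x8769).get(36867)

print(read_datetime_original("PICT0001.JPG"))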

3.3.2. File naming

Media files are best managed by a data management system. If you manage your media files yourself, then we recommend the following file and directory naming conventions:

  • Avoid renaming media files. Rather, organize media files in one directory for each deployment. This also prevents raw file names from overlapping across cameras. Note that file paths may be used as identifiers in classification data.

  • Make sure that ordering files alphabetically also sorts them chronologically. This is likely already the case for sequentially assigned file names (e.g. IMG_4545.jpg). Otherwise, start the name with the date (YYYYMMDD) or date-time (YYYYMMDD_HHMMSS). This can also be useful for directory names.

  • If you are naming files, use snake case (image_1), hyphen case (image-1) or camel case (videoFile1) rather than white space (image 1). Avoid special characters.

  • Do not store classification information as part of the media file name.

  • Be consistent.

# Good
PICT0001.JPG
20200709_093352.JPG

# Bad: can't be sorted chronologically
09072020_093352.JPG

# Bad: contains classification information
20200709_093352_Ardea_alba_1_Anas_platyrhynchos_male_female.jpg

# Bad: contains spaces and special characters
dep 2021 't WAD
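
The alphabetical-equals-chronological convention is easy to verify programmatically. A sketch in Python using Pillow to read EXIF timestamps (an assumption); the deployment directory name is hypothetical:

from pathlib import Path
from PIL import Image

def exif_timestamp(path):
    """EXIF DateTimeOriginal, e.g. '2020:06:12 06:04:32', which sorts
    lexicographically in chronological order."""
    return Image.open(path).getexif().get_ifd(0x8769).get(36867, "")

def sorts_chronologically(directory):
    """Check that alphabetical file order equals chronological order."""
    paths = sorted(Path(directory).glob("*.JPG"))  # alphabetical order
    timestamps = [exif_timestamp(p) for p in paths]
    return timestamps == sorted(timestamps)

# Hypothetical directory holding all media files of one deployment
print(sorts_chronologically("deployment_00a2c20d"))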

3.3.3. Storage

Due to the large volume of generated data, it is not trivial to securely store, back up and manage media files. Cloud services or well-managed institutional services are recommended, but these come at a substantial cost. We recommend the use of an online data management system to store your media files; some offer this storage for free. It is also very useful if your data storage system can serve media files over http/https, using stable URLs and optionally authentication. This allows you to directly reference/hotlink media files in a published dataset (see accessURI). Such a service is provided by e.g. Agouti (Casaer et al. 2019) (through https://multimedia.agouti.eu/assets/), Flickr (through https://www.flickr.com/services/api/) and Zenodo (through the undocumented https://zenodo.org/record/{record_id}/files/{file}).
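
Whether such hotlinked URLs remain stable and publicly accessible can be spot-checked automatically. A sketch using the requests library (an assumption); the URL is the first media file from Table 4:

import requests

url = ("https://multimedia.agouti.eu/assets/"
       "e68deaed-a64e-4999-87a3-9aa0edf5970d/file")

# A HEAD request avoids downloading the file; 200 means publicly
# resolvable, 401/403 suggest access is regulated at the provider level.
response = requests.head(url, allow_redirects=True, timeout=10)
print(url, response.status_code)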

3.4. Deployments

A deployment is the spatial and temporal placement of a camera. A deployment ends when the camera is removed or replaced, its position is changed or its memory card is swapped. The resulting media files are all associated with that deployment and are best organized as such. Deployment information includes camera location, duration, alignment and settings, and other covariates such as bait use, feature type, habitat, canopy cover, etc. (see Table 6). This information is not captured by the camera and needs to be recorded manually. Note that the deployment duration can be longer than the interval between the timestamps of the first and last captured media files.

Data management systems typically organize deployments into the following classes: deployment, location, camera, deployment group and subproject (see Figure 1).

Table 6. Recorded information for the deployment that generated the image in Figure 3. Source.
Property | Value
Deployment ID | 00a2c20d
Start date/time | 2020-05-30T04:57:37+02:00 (= 2020-05-30T02:57:37Z)
End date/time | 2020-07-01T11:41:41+02:00 (= 2020-07-01T09:41:41Z)
Location ID | e254a13c
Location name | B_HS_val 2_processiepark
Latitude | 51.496
Longitude | 4.774
Coordinate uncertainty | 187 m
Other location information | position:above stream
Camera set up by | anonymized:3eb30aa
Camera ID | 320
Camera model | Reconyx-HF2X
Camera delay | 0 s
Camera height | 1.30 m
Camera depth |
Camera tilt | -15 °
Camera heading | 285 °
Detection distance | 3.20 m
Timestamp issues | false
Bait use | false
Habitat | Campine area with a number of river valleys with valuable grasslands

3.4.1. Column naming

Deployment information is best recorded in a data management system. If you manage your deployment information elsewhere (e.g. a spreadsheet), then we recommend the following column naming conventions:

  • Use descriptive names, so users have an idea of what information to expect.

  • Separate words using snake case (deployment_location_1), hyphen case (deployment-location-1) or camel case (deploymentLocation1) rather than white space (deployment location 1). Snake case ensures the highest level of interoperability between systems; camel case is most often used in data standards.

  • Avoid abbreviations to mitigate the risk of confusion, except for well-known words like ID for identifier.

  • Avoid including units and data types. Describe these elsewhere (e.g. in a separate sheet, README document or Table Schema), together with the column definition and controlled values (see Table 7).

  • Be consistent.

# Good
scientificName
deployment_group

# Bad: contains spaces
scientific name

# Bad: abbreviated
dep_gr

# Bad: inconsistent naming
latitude & coordinatesLongitude

# Bad: includes unit or data type
camera_height_meter_double
Table 7. Example of how to describe deployment data in a separate spreadsheet.
Column name | Definition | Notes | Unit | Data type | Required (y/n)
deploymentID | Unique identifier of the deployment. | This identifier is auto-generated. | | string | yes
deploymentStart | Date and time at which the deployment started. | Formatted as YYYY-MM-DDTHH:MM:SSZ. | | datetime | yes
deploymentEnd | Date and time at which the deployment ended. | Formatted as YYYY-MM-DDTHH:MM:SSZ. | | datetime | yes
latitude | Latitude of the deployment location. | Uses the WGS84 datum. | decimal degrees | number | yes
longitude | Longitude of the deployment location. | Uses the WGS84 datum. | decimal degrees | number | yes
cameraHeight | Height at which the camera was deployed. | | meters | number | no
cameraHeading | Angle at which the camera was deployed in the horizontal plane. | Values (0-360) start clockwise from north, so 0 = north, 90 = east, 180 = south, 270 = west. | decimal degrees | number | no
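
The same information can also be captured in a machine-readable Frictionless Table Schema, which tools can then use to validate the data. A minimal sketch in Python (abbreviated to three of the fields from Table 7; written as plain JSON, so no extra libraries are required):

import json

# Sketch of a Frictionless Table Schema describing deployment columns
# from Table 7 (abbreviated to three fields for brevity).
schema = {
    "fields": [
        {"name": "deploymentID", "type": "string",
         "description": "Unique identifier of the deployment.",
         "constraints": {"required": True, "unique": True}},
        {"name": "deploymentStart", "type": "datetime",
         "description": "Date and time at which the deployment started.",
         "constraints": {"required": True}},
        {"name": "latitude", "type": "number",
         "description": "Latitude of the deployment location (WGS84).",
         "constraints": {"required": True, "minimum": -90, "maximum": 90}},
    ]
}

with open("deployments.schema.json", "w") as f:
    json.dump(schema, f, indent=2)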

3.4.2. Location

A location is the physical place where a camera is located during a deployment. It can be described with a name, identifier and/or description, but we recommend always recording the geographical coordinates. Those are most commonly expressed as latitude and longitude in decimal degrees, using the WGS84 datum.

The coordinates are best determined using a GPS receiver at the location itself. If this is not possible, use (online) resources and georeferencing best practices (Chapman and Wieczorek 2020) to obtain those. In addition to the coordinates and geodetic datum (e.g. WGS84) it is important to record the uncertainty of the coordinates, which is affected by several factors:

  • The extent of the location. Note that for camera traps this includes the detection distance, which is typically between 5 and 20 m.

  • The accuracy of the GPS receiver or georeferencing resource. Most GPS receivers obtain an accuracy of 5 metres in open areas when using four or more satellites (Chapman and Wieczorek 2020). Forest canopy or limited satellite connection can reduce accuracy. Google Maps or Open Street Maps have an accuracy of 8 m (Chapman and Wieczorek 2020).

  • The coordinate precision. The less precise (and the closer to the equator), the higher the uncertainty, e.g. WGS84 coordinates with a precision of 0.001 degree have an uncertainty of 157 m at the equator (see Table 3 in Chapman and Wieczorek (2020)).

  • An unknown datum. This can range from centimetres to kilometres (Chapman and Wieczorek 2020), so it is important to always record the datum used by the GPS receiver or georeferencing resource (WGS84 for Google Maps or Open Street Maps).

The combined maximum uncertainty is most conveniently expressed as a coordinate uncertainty in metres, allowing the location to be described with the point-radius-method.
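
Under these assumptions, the combined uncertainty can be computed by summing the individual contributions. A sketch in Python; the 157 m per 0.001 degree figure is the worst case at the equator, following Chapman and Wieczorek (2020):

import math

def coordinate_uncertainty(gps_accuracy_m, detection_distance_m,
                           coordinate_precision_deg):
    """Combine uncertainty components into a single point-radius
    value in metres (a simple additive sketch)."""
    # Worst-case distance covered by the coordinate precision:
    # ~111,320 m per degree at the equator, in both axes.
    precision_m = coordinate_precision_deg * 111_320 * math.sqrt(2)
    return round(gps_accuracy_m + detection_distance_m + precision_m)

# 30 m GPS accuracy, no detection distance, 0.001 degree precision:
print(coordinate_uncertainty(30, 0, 0.001))  # 187, cf. Table 12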

Most other properties associated with a location, such as country and state, but also elevation, slope, land cover or leaf area index, can be derived from the coordinates using an online resource.

3.4.3. Camera model, settings and alignment

Since a deployment relates to the placement of a camera, it is important to capture information regarding its model, settings and alignment. The model consists of the manufacturer and model name (e.g. Reconyx-PC800). Except for the quiet period, most camera settings are typically automatically recorded as part of the EXIF metadata. The detection distance can vary a lot depending on terrain and vegetation and is best measured in the field by having someone move in front of the camera at different distances. The alignment is the physical placement of a camera in 3D space. It consists of camera height, camera depth, camera tilt and camera heading.

3.4.4. Deployment groups

It can be useful to categorize deployments into deployment groups to facilitate their data management and analysis. A deployment group can be thematic (e.g. paired deployment), spatial (e.g. private land, open woodland) or temporal (e.g. summer 2005) in nature (see Figure 4). A single deployment can belong to zero or more deployment groups. Subprojects are a special kind of deployment group, used to subdivide very large projects containing many thousands of deployments and thereby facilitate their management. A single deployment can belong to only one subproject.

Figure 4. Map showing a selection of deployments from the NC Candid Critters project (Lasky 2016). Deployments can be categorized differently based on the deployment group(s) they belong to. Left (A): deployment groups representing site type (forested area, open area, residential yard, trail); right (B): deployment groups representing property type (private, public). The project also used subprojects to group deployments per county (not shown on figure).

3.4.5. Covariates

Covariates are variables that may affect the behaviour, and thus the detection, of animals. Recording them is important for further analysis of the data. Bait, feature type and habitat type are commonly recorded covariates. What covariates to record and how to record them should be consistent within a project, but typically is not across projects, unless these form part of a larger, well-coordinated research study. To aid interoperability, we recommend making use of existing classification systems to record covariates.

3.5. Observations

Observations are an interpretation of what can be seen or heard on media files. These are not limited to species observations, but can also indicate whether the media file contains a vehicle, human or unknown object, or that nothing of interest was observed (blanks). That is why they are sometimes also called classifications, annotations or identifications. The aim is typically to record what animal species were observed, optionally including information on their group size, life stage, sex and behaviour (see Table 8).

Observations are best recorded in a data management system, which will typically organize observations into the following classes: observation, observation type and sequence (see Figure 1). If you manage your observation information elsewhere (e.g. a spreadsheet), then we recommend following the same column naming conventions as for deployments.

Table 8. Recorded information for one of the observations based on the image in Figure 3. It is classified at event level (sequence) in the camera trap management system Agouti. Source.
Property | Value
Observation ID | 05230014
Observation type | animal
Taxon ID | GCHS
Scientific name | Ardea cinerea
Count | 1
Life stage | adult
Classification method | human
Classified by | Peter Desmet
Classification timestamp | 2023-02-02T13:57:58Z

3.5.1. Classification

Unfortunately, camera traps do not provide observations directly. Media need to be classified to obtain observations. This process can be performed in different steps and with different levels of precision and granularity:

  • Media does or does not contain object(s) of interest.

  • Object(s) of interest is a human or vehicle, or cannot be identified.

  • Object(s) of interest is an animal, identified at a high taxonomic level (e.g. a rodent).

  • Animal is identified at species or subspecies level (e.g. Sus scrofa).

  • Animal is identified as a known individual (e.g. wolf Noëlla).

  • Other properties of the animal are recorded, such as group size, life stage, sex, and behaviour.

Classification can be done by humans and/or machines. These actors (experts, volunteers, AI) can reach different levels of precision (see Table 9), based on their expertise (can I reach such a precision?) and effort (do I want to reach such a precision?). Since classification can be very labour-intensive for larger studies, it is best to use an approach that yields the necessary data efficiently. Citizen scientists, artificial intelligence and/or classifying events rather than individual media can help to speed up the process (Green et al. 2020). Whatever the technique, we recommend always recording who made the classification and what type of technique (human vs machine) was used.

Table 9. Observation records for four observed ducks, but provided at different levels of precision. Row one is the result of a classification at a high taxonomic level (family Anatidae). Row two is the result of a classification at species level (Anas platyrhynchos), but no further characteristics were recorded. Rows three and four are the result of a classification that further specified one duck to be male and showing foraging behaviour. Source.
scientificName | sex | count | behavior
Anatidae | | 4 |
Anas platyrhynchos | | 4 |
Anas platyrhynchos | male | 1 | foraging
Anas platyrhynchos | | 3 |

3.5.2. Citizen science

Citizen scientists are volunteers from the non-scientific community who help scientists in their work. They can contribute to camera trap studies in a number of ways, such as placing cameras and collecting/swapping memory cards. In a practice called crowdsourcing, researchers can also distribute the task of classifying media by presenting these online to a community of citizen scientists. Each classification helps to confirm or improve the community’s opinion on the observed species (Swanson et al. 2015; Hsing et al. 2018).

Most projects use established online platforms for crowdsourcing (Fortson et al. 2012; Swanson et al. 2015), such as Zooniverse (Simpson et al. 2014; e.g. the Chimp&See project), MammalWeb (Bradley 2017), DigiVol (Alony et al. 2020) or DoeDat (Groom et al. 2018). These platforms give access to large, already existing volunteer bases, which is particularly important if classifications are needed within a short time frame. Note however that managing a citizen science project takes time and might be more beneficial for larger studies. In addition to uploading media to a platform, waiting for classifications, downloading consensus observations and dealing with non-consensus observations, you need to keep the community engaged and/or attract new members. It is also important to exclude sensitive media from the process, such as media containing humans (to protect their privacy) and rare species. This will require some type of preprocessing, which is where artificial intelligence (AI) comes in (Weinstein 2018).

3.5.3. Artificial intelligence

In the context of camera trap research, artificial intelligence (AI) typically refers to the use of computer vision for classification. These computer models are trained with already classified datasets and can process millions of media files in a fraction of the time it would take a human (Norouzzadeh et al. 2021). The field has seen significant advancements in recent years: models are now able to filter out blanks and media containing humans, recognize species, count or track individuals, and even recognize individual animals (Price Tack et al. 2016; Gomez Villa et al. 2017; Nguyen et al. 2017; Brides et al. 2018; Norouzzadeh et al. 2021; Yousif et al. 2018). New models come out every year, but it is their incorporation into data management systems that will most increase their use, especially by users with no experience in machine learning. As such, computer vision will likely become the dominant technique to classify camera trap data in the near future.

Still, computer vision will not entirely replace human classification, since large and diverse sets of already classified data are needed to train the models. Unbalanced training datasets, such as those with a highly variable number of images per species or ones that are small and geographically limited, may result in poorly performing models. Additionally, the accuracy of computer vision classification is currently still inferior to that of a human expert. A combination of AI-aided preprocessing and human verification is therefore recommended.

3.5.4. Media- or event-based classification

Classifications can be based on a single media file (typically an image) or an event (typically a sequence of images). In the latter technique, all media files that belong to the event are assessed as a whole to determine the species and their number of individuals. This is less time-consuming for human classifiers and can lead to better estimates of group size, since the number of individuals passing by a camera can be larger than the number visible in a single image. The disadvantage of event-based classification is that it is not possible to split the classification into events shorter than the one that was assessed (the same is true for videos classified as a whole). Nor can those classifications be used to train computer vision models, which require media-based training datasets.

As a result, data management systems may favour one technique over the other, or offer both. Resulting datasets can include media-based, event-based or both types of classifications.

3.5.5. Common or scientific names

Media can be classified using common (e.g. roe deer) or scientific names (e.g. Capreolus capreolus) for taxa. Common (or vernacular) names are easier to remember and allow for better public engagement. The downside is that they are subject to translation, can vary regionally, sometimes refer to different species (e.g. “elk” in North America refers to Cervus canadensis, while in Europe it is used for Alces alces) and might not exist for every species and language combination. Scientific names on the other hand follow strict nomenclatural rules, are globally consistent and are not subject to translation. We therefore recommend always storing the scientific name as part of the observation, even if only common names are presented to the user.

The list of scientific names that are available for classification in a project is best maintained in a single reference table. This facilitates the management of taxonomic classification and associated common names, and allows classification options to be restricted to those species that are likely to occur. More taxa can be added if needed, but only after verification. This practice is used by most data management systems. To populate such a reference table, we recommend using an authoritative source (see Table 10) and storing the taxon identifiers used by that source as reference.
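
Taxon identifiers can be retrieved programmatically from such sources. A sketch using the pygbif client for the GBIF Backbone Taxonomy (an assumption; Catalogue of Life and WoRMS offer similar APIs):

from pygbif import species

# Match a scientific name against the GBIF Backbone Taxonomy and keep
# the stable taxon identifier (usageKey) in the reference table.
result = species.name_backbone(name="Capreolus capreolus")
print(result["usageKey"], result["scientificName"])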

Table 10. Selection of sources for scientific names, common names and taxonomic information.
Source | Taxonomic coverage | Use for
Catalogue of Life (Bánki et al. 2023) | All | Scientific names; common names (select languages); taxonomy
World Register of Marine Species (WoRMS) (WoRMS Editorial Board 2023) | Marine species (not exclusively) | Scientific names; common names (many languages); taxonomy
Wikipedia (English and other language versions) | All | Common names (many languages)
Clements Checklist of Birds of the World (Clements et al. 2022) | Birds | Scientific names; common names (English); taxonomy
IUCN Red List of Threatened Species | Mammals | Scientific names; common names (select languages); taxonomy
American Society of Mammalogists Mammal Diversity Database | Mammals | Scientific names; common names (English); taxonomy

3.6. Data management systems

Managing camera trap data can be daunting, especially for larger projects. Luckily, a number of software tools and platforms have been developed to help researchers with some or all aspects of camera trap data management (Young et al. 2018). These initiatives were often started by research teams to facilitate their own needs, but some have grown into mature systems that can be used by anyone. We discuss and recommend five of those below (see Table 11 for an overview of their features). They support the entire life cycle of camera trap data management:

  • Create one or more projects

  • Invite collaborators with different levels of access

  • Upload media and create deployments

  • Classify media to observations, optionally supported by AI and citizen science

  • Manage reference lists of species, locations, covariates, etc.

  • Engage the public by making some or all project metadata available on a website

  • Export data in a standardized format for further analysis and data publication

  • Archive data, including media files

3.6.1. Agouti

Agouti (Casaer et al. 2019) (https://agouti.eu) is an online system for managing camera trap data. It is maintained by Wageningen University & Research and the Research Institute for Nature and Forest (INBO), based respectively in the Netherlands and Belgium. Agouti is mainly used by European projects and is free to use.

Classification is event-based, but animal positions can be recorded at media level, making it possible to record the data necessary for distance analyses (Howe et al. 2017) and random encounter modelling (Marcus Rowcliffe et al. 2011). AI classification is possible, using a dedicated species classification model that is updated regularly. Media containing humans are always hidden from the public. Data are stored on university infrastructure, which also offers long-term archival and hotlinking to media. Project metadata can be made available via a public portal. Data can be exported as Camtrap DP.

Agouti is a good choice for organizations who want a free, full-featured, Europe-based service.

3.6.2. Camelot

Camelot (Hendry and Mann 2018) (https://camelotproject.org/) is a local system for managing camera trap data. It is maintained as a volunteer initiative based in Australia. Camelot is free to use, open source, available for all major operating systems and requires installation. It is typically used as a local desktop application, but can be set up on a server allowing multiple users to connect via their browser. Classification is media-based with the option to classify multiple media at once. AI classification is not offered. Data can be exported in a custom format.

Camelot is a good choice for organizations and individuals who want a lightweight solution they can manage themselves.

3.6.3. TRAPPER

TRAPPER (Bubnicki et al. 2016) (https://os-conservation.org/projects/trapper) is an online system for managing camera trap data. It is maintained by the Open Science Conservation Fund, based in Poland. TRAPPER is mainly used by European projects and is free to use. The software is open source and requires installation and hosting. Classification is media-based with the option to classify multiple media at once. AI classification is possible, using existing species classification models. Data can be exported as Camtrap DP.

TRAPPER is a good choice for organizations who want control over the software and where their data are stored.

3.6.4. Wildlife Insights

Wildlife Insights (Ahumada et al. 2020) (https://www.wildlifeinsights.org) is an online system for managing camera trap data. It is maintained by Conservation International, Google and other partners, based in the United States. Wildlife Insights is mainly used by projects in the Americas and uses a tiered subscription model (including free tiers). Uploaded media are automatically classified at media level by AI, using a dedicated species classification model developed by Google. Media containing humans are always hidden from the public. Further classification offers the option to classify multiple media at once. Data are stored in the cloud, can be used by Wildlife Insights to train AI and must be made public after an embargo period of maximum 48 months. Project metadata are always available via a public portal. Data can be exported in a custom format, based on CTMS (Forrester et al. 2016).

Wildlife Insights is a good choice for organizations who want a full-feature service with powerful AI and open data requirements.

3.6.5. WildTrax

WildTrax (Bayne et al. 2018) (https://www.wildtrax.ca/) is an online system for managing camera trap data. It is maintained by the University of Alberta, based in Canada. WildTrax is mainly used by Canadian projects and is free to use (except for very large projects). Classification is media-based with the option to classify multiple media at once. AI classification is possible, but only at a broad level (blanks, animals, vehicles), species classification is not (yet) offered. Data are stored in the cloud. Project metadata can be made available via a public portal. Data can be exported in a custom format (with associated R package).

WildTrax is a good choice for organizations who want a free service based in Canada.

Table 11. Comparison of features offered by the data management systems discussed above. Features that are the same for all systems are not shown.
Feature | Agouti | Camelot | TRAPPER | Wildlife Insights
Provided as | Service | Software | Software | Service
Cost | Free | Free | Free | Tiered subscription model (incl. free)
Open source | No | Yes | Yes | No
Supported media types | Image, Video | Image | Image, Video | Image
Multiple user roles | Yes | Yes (limited) | Yes | Yes
Supported languages | English, Croatian, Dutch, French, German, Polish, Spanish | English | English | Many (via Google Translate)
Media- or event-based classification | Event-based | Media-based | Media-based | Media-based
AI classification | Yes (species classification) | No | Yes (species classification) | Yes (species classification)
Integration with crowdsourcing platform | Yes (Zooniverse) | No | Yes | No
Project portal | Yes | No | No | Yes
Data storage | University infrastructure | Own server | Own server or cloud | Cloud (Google Cloud Platform)
Data rights granted to system | Minimal | None | None | Some (e.g. for training AI and summary data products)
Open data requirement | No (but recommended) | No | No (but recommended) | Yes (data can be kept private for 48 months, project metadata are always public)
Media hosting | Yes | No | Yes | Yes
Export format | Camtrap DP | Custom format | Camtrap DP | Custom format

4. Publishing camera trap data

Data publication is the process of making biodiversity data open and FAIR (Wilkinson et al. 2016). Adopting the FAIR principles guarantees that your camera trap data can be found, accessed, integrated and reused (see Section 4.1) for many applications, from biological use cases (species distribution modelling, population density estimation, etc.) to providing training data for machine learning model development. We recommend the use of the GBIF Integrated Publishing Toolkit (IPT) (Robertson et al. 2014) to do so. It facilitates data publication and registration with the Global Biodiversity Information Facility (GBIF), an international network and data infrastructure for biodiversity data.

Before you publish through GBIF, you must prepare (see Section 4.2) and standardize your data. Data standardization is the transformation of data to a specific data exchange format so it becomes interoperable with other data at GBIF. GBIF supports Camtrap DP and Darwin Core Archive as the data exchange formats for camera trap data. Recommendations for these formats are provided in Section 4.3 and Section 4.4 respectively.

We strongly recommend publishing camera trap data at project level, i.e. one dataset for one project. This makes it easier to describe the scope, methodology, contributors, funding sources, etc. in the metadata. For a general overview on how to publish data to GBIF, see GBIF Secretariat (2018).

4.1. FAIR camera trap data

Imagine you need to aggregate all observations of muskrats recorded in Belgium in 2023. Doing so is hard if the data are scattered across sources and use different access protocols, field names and languages. Making these data sources FAIR means organizing them in such a way that everyone (humans and machines) can find, use, understand and combine them.

The easiest way to make a dataset findable is by providing meaningful metadata (e.g. title, description and keywords) and depositing it in a research repository (such as the IPT (Robertson et al. 2014) in combination with GBIF). Repositories provide cross-dataset search functionalities and will assign each dataset a unique identifier so that it can be uniquely referenced and accessed. Adding an open license to the dataset will allow users to access and reuse the data (in addition to the metadata), while rich metadata (e.g. methodology) will enable users to determine if the dataset is fit for their purpose. The most challenging aspect is to make a dataset interoperable so that it can easily be integrated with other data. This can be achieved by using standards: research repositories will standardize metadata and Camtrap DP can be used to standardize the data. See also Bubnicki et al. (2023) (section "Is Camtrap DP FAIR?") for more information on how Camtrap DP enables FAIR data exchange.

4.2. Preparing data

4.2.1. Stable unique identifiers

Terms like deployments.deploymentID or dwc:occurrenceID expect an identifier that is:

  • Required to be unique, i.e. it uniquely refers to a record or concept.

  • Strongly recommended to be stable/persistent, i.e. it does not change over time and can safely be referenced.

  • Recommended to be globally unique, i.e. it uniquely refers to a record or concept in a global context.

If available, we recommend using the identifier assigned by the data management system, as is. These identifiers will be unique, most likely stable, and sometimes globally unique (e.g. a UUID). They also allow users (with access) to look up the record in the data management system. We advise against appending elements to the identifier to make it globally unique, since this makes it more prone to change. Since datasets can be uniquely identified (e.g. with a DOI), it is sufficient if the identifier is unique within the dataset.
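
If no system-assigned identifiers are available, minting your own is straightforward. A sketch using Python's standard library; the key point is to generate each identifier once, store it with the record and never regenerate it on export:

import uuid

# Mint a stable, globally unique identifier once per record and store
# it with the data; regenerating identifiers on every export would
# break their stability.
deployment_id = str(uuid.uuid4())
print(deployment_id)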

4.2.2. Sensitive information

Camera trap data may contain sensitive information, such as personal information (e.g. names or images of living persons), the occurrence of sensitive (e.g. rare or endangered) species, the locations of actively managed cameras, or even notes and comments not intended for the public. We recommend following the best practices in Chapman (2020), which favour generalization over restriction of the record as a whole.

Personal data

Personal data is any information that relates to an identified or identifiable living person. This information is subject to regulations such as the GDPR. In camera trap data, personal data include the names of participants, their email addresses and the whereabouts of participants who set up the cameras (identifiable by combining the name with the deployment date-time and location). In Camtrap DP, person names can appear in package.contributors, package.bibliographicCitation, deployments.setupBy and observations.classifiedBy. In a Darwin Core Archive, person names can appear in the metadata and in terms like dwc:identifiedBy.

We recommend contacting participants to ask if their personal information can be made public, and to anonymize (e.g. anonymized:3eb30aa) or exclude it when they prefer not to. Some data management systems (such as Agouti (Casaer et al. 2019)) allow users to indicate their preferences and automatically anonymize their personal data in exports. Note that it may not be possible to permanently remove personal information from older versions of an already published dataset.
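
One simple anonymization approach is to replace a name with a truncated hash, which stays stable across exports but is not directly reversible. A sketch (the anonymized:3eb30aa style mirrors the example above; a real implementation should use a secret salt, since plain hashes of known names can be guessed):

import hashlib

def anonymize(name, salt="replace-with-a-secret-salt"):
    """Map a person name to a stable pseudonym like 'anonymized:3eb30aa'."""
    digest = hashlib.sha256((salt + name).encode("utf-8")).hexdigest()
    return f"anonymized:{digest[:7]}"

print(anonymize("Jane Doe"))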

Sensitive media files

Media files containing identifiable persons are a form of personal data and should be identified and kept private to protect the privacy of those persons. The same may be necessary for media files showing vehicles or the camera setup. Media files containing sensitive species may need to be kept private if they allow the location to be identified.

We recommend providing the URL to all media files (in media.filePath or ac:accessURI), but regulating access at the provider level (e.g. https://multimedia.agouti.eu/assets/813bafb2-befe-45fa-b0e3-080f1f019a70/file). The expected access can be described in media.filePublic or ac:serviceExpectation. Note that in a Darwin Core Archive, observations (and media) of humans, vehicles, setup, etc. are typically excluded.

Sensitive location information

Camera trap data may contain location information of sensitive species. Locations of actively managed cameras can also be sensitive to vandalism and theft. We recommend following Chapman (2020) to determine sensitivity (Chapter 2) and to choose an appropriate generalization.

Note that camera trap location information is often the same across multiple deployments. Generalizing the coordinates of only the deployment with associated sensitive information is likely not sufficient to prevent correlational analyses: a deployments.locationID shared by multiple deployments, for example, can allow users to deduce generalized localities/records. Make sure to generalize the coordinates for the location across all its deployments.

Whatever the selected level of generalization, document it in the dataset metadata and appropriate terms, so that users are aware. See Table 12 for an example.

Table 12. How to express non-generalized vs generalized location information in Camtrap DP and Darwin Core. The generalized example assumes a sensitivity of Category 4. The coordinate uncertainty of 187 m is the sum of the GPS precision (30 m) and the maximum uncertainty associated with coordinates that have a precision of 0.001 degree (157 m).
| Camtrap DP term | Darwin Core term | Non-generalized | Generalized |
| --- | --- | --- | --- |
| deployments.latitude | dwc:decimalLatitude | 51.18061 | 51.181 |
| deployments.longitude | dwc:decimalLongitude | 5.65490 | 5.655 |
| implied | dwc:geodeticDatum | EPSG:4326 | EPSG:4326 |
| deployments.coordinateUncertainty | dwc:coordinateUncertaintyInMeters | 30 | 187 |
| package.coordinatePrecision | dwc:coordinatePrecision | 0.00001 | 0.001 |
| | dwc:georeferenceRemarks | source assumed to be GPS, uncertainty defaulted to 30 m | source assumed to be GPS, uncertainty defaulted to 30 m |
| | dwc:dataGeneralizations | | coordinates rounded to 0.001 degree |
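
Scripting the generalization helps to apply it consistently across all deployments that share a location. A minimal sketch in R, assuming Camtrap DP column names and the Category 4 example from Table 12:

```r
deployments <- read.csv("data/deployments.csv", stringsAsFactors = FALSE)

# Round coordinates to 0.001 degree for ALL deployments, so shared locations
# cannot be recovered by comparing generalized and non-generalized records
deployments$latitude  <- round(deployments$latitude, 3)
deployments$longitude <- round(deployments$longitude, 3)

# Uncertainty = GPS precision (30 m) + maximum uncertainty associated with
# coordinates of 0.001 degree precision (157 m)
deployments$coordinateUncertainty <- 30 + 157

# Document the generalization in the metadata and in dwc:dataGeneralizations
generalization_note <- "coordinates rounded to 0.001 degree"
```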

Other sensitive information

Text fields such as comments and notes (e.g. deployments.deploymentComments or dwc:occurrenceRemarks) may contain sensitive information such as person names, sensitive location information or information not intended for the public. We recommend verifying values and generalizing where necessary (see Chapter 3 in Chapman (2020)).

4.3. Camtrap DP

We recommend the use of Camera Trap Data Package (Camtrap DP) to publish camera trap data. It is specifically designed for this type of data and can retain more information than a Darwin Core Archive (Bubnicki et al. 2023). Some data management systems directly support it as an export format (see Table 11), reducing the need for data transformation when publishing through GBIF.

See the Camtrap DP website for term definitions, recommendations and examples.

At the time of writing, GBIF does not yet support the publication of Camtrap DP in its production environment. Support will be released as a feature in version 3 of the Integrated Publishing Toolkit (IPT).

Not all information in a published Camtrap DP is currently harvested by GBIF. The GBIF data model requires it to be transformed to Darwin Core before ingestion. This transformation is provided by the write_dwc() function in the R package camtraptor (Oldoni et al. 2023), which implements the recommendations in this document. GBIF will be able to process more information from a published Camtrap DP once it has implemented its new data model (GBIF Secretariat 2022).
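
In practice the transformation is a short script. A sketch based on the camtraptor documentation (the dataset path is a placeholder and function arguments may differ between package versions):

```r
# install.packages("camtraptor")
library(camtraptor)

# Read a Camtrap DP dataset (local path or URL to its datapackage.json)
package <- read_camtrap_dp("data/datapackage.json")

# Transform to Darwin Core and write the resulting files to a directory
write_dwc(package, directory = "dwc")
```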

4.4. Darwin Core Archive

4.4.1. Why not a sampling event dataset?

With their hierarchical events (deployments, sequences) and resulting observations, it seems logical to express camera trap data as Sampling-event data with an Event core (see Table 13) and an Occurrence extension (see Table 14). Doing so allows us to provide detailed (though repeated) information about each type of event and offers the possibility to add a Measurement Or Facts extension with alignment and other information (mostly relevant for the deployments).

Unfortunately, it also prevents us from expressing information about the media as an extension, since the star schema design of a Darwin Core Archive does not allow the Occurrence extension to be related to an Audubon Media Description extension. It is technically possible to link the Audubon Media Description extension to the Event core, but the media would then not be linked to the occurrences and would not appear on occurrence pages at GBIF.org. The only available option to express information about the media at the occurrence level is dwc:associatedMedia, which reduces it to a (list of) URL(s): licence, media type, capture method, bounding boxes, etc. cannot be provided.

Table 13. Event core with camera trap data. It contains three types of events: one deployment (with a duration of days), one sequence (with a duration of seconds) and two media-based events (with a single timestamp). Note that location information is the same for all events.

| eventType | eventID | parentEventID | eventDate | Location information |
| --- | --- | --- | --- | --- |
| deployment | 00a2c20d | | 2020-05-30T02:57:37Z/2020-07-01T09:41:41Z | 51.496, 4.774 |
| sequence | 79204343 | 00a2c20d | 2020-06-12T04:04:29Z/2020-06-12T04:04:55Z | 51.496, 4.774 |
| media | e68deaed | 79204343 | 2020-06-12T04:04:29Z | 51.496, 4.774 |
| media | c5efbcb3 | 79204343 | 2020-06-12T04:04:30Z | 51.496, 4.774 |

Table 14. Occurrence extension with camera trap data. It contains three observations: two media-based classifications of Anas platyrhynchos and one event-based classification of Ardea cinerea. Information about the media files can only be provided in dwc:associatedMedia.

| occurrenceID | eventID | scientificName | associatedMedia |
| --- | --- | --- | --- |
| e68deaed_2 | e68deaed | Anas platyrhynchos | https://multimedia.agouti.eu/assets/e68deaed-a64e-4999-87a3-9aa0edf5970d/file |
| c5efbcb3_2 | c5efbcb3 | Anas platyrhynchos | https://multimedia.agouti.eu/assets/c5efbcb3-34f5-4a59-bc15-034e01b05475/file |
| 05230014 | 79204343 | Ardea cinerea | https://multimedia.agouti.eu/assets/e68deaed-a64e-4999-87a3-9aa0edf5970d/file \| https://multimedia.agouti.eu/assets/c5efbcb3-34f5-4a59-bc15-034e01b05475/file |

We therefore recommend expressing camera trap data as an Occurrence dataset with an Occurrence core and an Audubon Media Description extension (see Table 15 and Table 16). This treats media as primary data records, which is important given that they are the evidence on which the observations are based. The event hierarchy can largely be retained as well, since the Occurrence core allows occurrences to be grouped into events (dwc:eventID) and parent events (dwc:parentEventID). By providing the event/sequence identifier in dwc:eventID and the deployment identifier in dwc:parentEventID, observations can be grouped just like they would be in an Event core, and GBIF.org will automatically create event pages for them (see Figure 5). Event duration cannot be provided as such, but eventDate and samplingEffort can retain most of that information. Information about the deployment location, habitat, sampling protocol, etc. is repeated for every observation in the deployment.
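
The mapping from Camtrap DP identifiers to these grouping terms is mechanical. A minimal sketch in R, assuming Camtrap DP column names (write_dwc() in camtraptor covers this mapping and much more):

```r
observations <- read.csv("data/observations.csv", stringsAsFactors = FALSE)

# Map Camtrap DP identifiers to Darwin Core grouping terms
occurrences <- data.frame(
  occurrenceID   = observations$observationID,
  eventID        = observations$eventID,       # event/sequence identifier
  parentEventID  = observations$deploymentID,  # deployment identifier
  scientificName = observations$scientificName
)
```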

Term recommendations for the Occurrence core and Audubon Media Description extension are provided in Section 4.4.2 and Section 4.4.3 respectively.

Table 15. Occurrence core with camera trap data. It contains the same three observations as in Table 14. The event/sequence identifier is provided in dwc:eventID, the deployment identifier in dwc:parentEventID.

| occurrenceID | eventID | parentEventID | scientificName | eventDate | Location information |
| --- | --- | --- | --- | --- | --- |
| e68deaed_2 | 79204343 | 00a2c20d | Anas platyrhynchos | 2020-06-12T04:04:29Z | 51.496, 4.774 |
| c5efbcb3_2 | 79204343 | 00a2c20d | Anas platyrhynchos | 2020-06-12T04:04:30Z | 51.496, 4.774 |
| 05230014 | 79204343 | 00a2c20d | Ardea cinerea | 2020-06-12T04:04:29Z/2020-06-12T04:04:55Z | 51.496, 4.774 |

Table 16. Audubon Media Description extension with camera trap data. It contains the same two media files as referenced in Table 14, but now allows more information to be shared per file.

| occurrenceID | identifier | accessURI | CreateDate | captureDevice | rights |
| --- | --- | --- | --- | --- | --- |
| e68deaed_2 | e68deaed | https://multimedia.agouti.eu/assets/e68deaed-a64e-4999-87a3-9aa0edf5970d/file | 2020-06-12T04:04:29Z | Reconyx-HF2X | https://creativecommons.org/licenses/by/4.0/legalcode |
| c5efbcb3_2 | c5efbcb3 | https://multimedia.agouti.eu/assets/c5efbcb3-34f5-4a59-bc15-034e01b05475/file | 2020-06-12T04:04:30Z | Reconyx-HF2X | https://creativecommons.org/licenses/by/4.0/legalcode |
| 05230014 | e68deaed | https://multimedia.agouti.eu/assets/e68deaed-a64e-4999-87a3-9aa0edf5970d/file | 2020-06-12T04:04:29Z | Reconyx-HF2X | https://creativecommons.org/licenses/by/4.0/legalcode |
| 05230014 | c5efbcb3 | https://multimedia.agouti.eu/assets/c5efbcb3-34f5-4a59-bc15-034e01b05475/file | 2020-06-12T04:04:30Z | Reconyx-HF2X | https://creativecommons.org/licenses/by/4.0/legalcode |

Figure 5. Screenshot of an event page created by GBIF.org from information provided in an Occurrence core (based on row 3 in Table 15). Notice the event ID (a sequence) and parent event ID (a deployment).

4.4.2. Occurrence core

As described above, we recommend the use of an Occurrence core for expressing camera trap data as a Darwin Core Archive. See Table 17 for term recommendations. These recommendations align with the GBIF data quality requirements for Occurrence datasets (GBIF Secretariat 2020) and use the same terminology (Required, Strongly recommended, Share if available).

Note that the Occurrence core should only contain animal observations, so classifications of blanks, vehicles and preferably humans should be filtered out (see the sketch below). The number of records will depend on the size of the study, the classification effort (are all media classified?), the classification precision (see Table 9) and whether media-based or event-based classification was used. Media-based classifications in particular can substantially increase the number of occurrences, with little added benefit for ecological research. Camtrap DP is designed for both, but when publishing as a Darwin Core Archive, we recommend providing only event-based observations if available.
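
The filtering can be done on Camtrap DP's observationType and observationLevel columns (column names per the Camtrap DP specification; adapt as needed for other formats). A minimal sketch in R:

```r
observations <- read.csv("data/observations.csv", stringsAsFactors = FALSE)

# Keep animal observations only: drop blanks, humans, vehicles and unknowns
animals <- observations[observations$observationType == "animal", ]

# Prefer event-based observations over media-based ones, if present
animals <- animals[animals$observationLevel == "event", ]
```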

Table 17. Recommended terms to use when expressing camera trap data as an Occurrence core.

| Term | Status | Example value |
| --- | --- | --- |
| type | Share if available | StillImage |
| license | Share if available | https://creativecommons.org/publicdomain/zero/1.0/legalcode |
| rightsHolder | Share if available | INBO |
| datasetID | Share if available | 7cca70f5-ef8c-4f86-85fb-8f070937d7ab |
| collectionCode | Share if available | Agouti |
| datasetName | Share if available | Sample from: MICA - Muskrat and coypu camera trap observations in Belgium, the Netherlands and Germany |
| basisOfRecord | Required | MachineObservation |
| dataGeneralizations | Share if available | coordinates rounded to 0.001 degree |
| occurrenceID | Required | 05230014 |
| individualCount | Strongly recommended | 1 |
| sex | Share if available | |
| lifeStage | Share if available | adult |
| behavior | Share if available | |
| occurrenceStatus | Strongly recommended | present |
| occurrenceRemarks | Share if available | |
| organismID | Share if available | |
| eventID | Strongly recommended | 79204343 |
| parentEventID | Strongly recommended | 00a2c20d |
| eventDate | Required | 2020-06-12T04:04:29Z/2020-06-12T04:04:55Z |
| habitat | Share if available | Campine area with a number of river valleys with valuable grasslands |
| samplingProtocol | Strongly recommended | camera trap |
| samplingEffort | Share if available | 2020-05-30T02:57:37Z/2020-07-01T09:41:41Z |
| eventRemarks | Share if available | camera trap without bait near game trail \| tags: position:above stream |
| locationID | Share if available | e254a13c |
| locality | Share if available | B_HS_val 2_processiepark |
| minimumDepthInMeters | Share if available | |
| maximumDepthInMeters | Share if available | |
| minimumDistanceAboveSurfaceInMeters | Share if available | 1.30 |
| maximumDistanceAboveSurfaceInMeters | Share if available | 1.30 |
| decimalLatitude | Strongly recommended | 51.496 |
| decimalLongitude | Strongly recommended | 4.774 |
| geodeticDatum | Strongly recommended | EPSG:4326 |
| coordinateUncertaintyInMeters | Strongly recommended | 187 |
| coordinatePrecision | Share if available | 0.001 |
| identifiedBy | Share if available | Peter Desmet |
| dateIdentified | Share if available | 2023-02-02T13:57:58Z |
| identificationRemarks | Share if available | classified by a human |
| taxonID | Share if available | https://www.checklistbank.org/dataset/COL2023/taxon/GCHS |
| scientificName | Required | Ardea cinerea |
| kingdom | Strongly recommended | Animalia |

type

The nature of the resource. Use StillImage if the record is based on an image or sequence of images, MovingImage if based on a video. One can also use the broader term Image for all records.

license

The licence under which the data record is shared. Very likely this will be the same licence as the one used for the dataset as a whole, but it is possible to deviate (Waller 2020). To enable wide use, we recommend publishing data under a Creative Commons Zero waiver and providing it as a URL: https://creativecommons.org/publicdomain/zero/1.0/legalcode. In Camtrap DP, this term corresponds with the path of the licence that has the scope data in package.licenses, although there it is specified for the dataset as a whole, rather than per record.

rightsHolder

The person or organization (i.e. participant) owning or managing rights over the resource. In all likelihood this is the organization that decided under what licence the data are published and/or the publisher of the data (i.e. the organization selected as publisher when registering a dataset with GBIF). Use an acronym if the organization has one. In Camtrap DP, this term corresponds with the title of the contributor that has the role rightsHolder in package.contributors.

datasetID & datasetName

Respectively the identifier and name of the dataset. For dwc:datasetID we recommend using a stable URL or identifier that allows users to find information about the source dataset/study. In order of preference: dataset DOI (https://doi.org/10.15468/5tb6ze), study URL (http://n2t.net/ark:/63614/w12001317), or study identifier used by the data management system. In Camtrap DP, this term corresponds with package.id, unless a better identifier is available (e.g. a DOI). dwc:datasetName should refer to the title of the dataset/study as referred to by dwc:datasetID. We recommend using the same value for the title in the metadata. In Camtrap DP, this term corresponds with package.title.

collectionCode

The name or acronym identifying the collection or dataset the record was derived from. The term is traditionally used to indicate a physical collection; for camera trap data, we recommend providing the name of the data management system (i.e. the virtual collection) the record was derived from. This allows users to search for records from the same data management system across datasets. Recommended values: Agouti, Camelot, eMammal, Trapper, Wildlife Insights, etc. In Camtrap DP, this term corresponds with the title of the (applicable) source in package.sources.

basisOfRecord

The specific nature of the record. Set to MachineObservation for all records. While humans decide when and where to deploy a camera trap, and humans or machines (AI) can classify media, the capturing of the record is done by a machine responding to a sensor. This is critically different from human observations, where a human is actively in control of the decision whether to record an organism or not.

dataGeneralizations

The actions taken to make the published data less specific or complete than in its original form. We recommend succinctly describing here what sensitive information of the record was generalized and how. Note that this information can be provided at record level and does not need to apply to the whole dataset. If important information was omitted altogether, use dwc:informationWithheld.

Examples:

  • coordinates rounded to 0.001 degree
  • scientific name generalized to genus

occurrenceID

An identifier for the observation. Use a stable unique identifier. In Camtrap DP, this term corresponds with observations.observationID.

individualCount

The number of observed individuals. Note that this number is dependent on the precision of the identifications. In Camtrap DP, this term corresponds with observations.count.

sex

The sex of the observed individual(s). We recommend using the controlled values male and female, which are based on Camtrap DP and compatible with the GBIF Sex vocabulary. In Camtrap DP, this term corresponds with observations.sex.

lifeStage

The life stage of the observed individual(s). We recommend using the controlled values adult, subadult, and juvenile, which are based on Camtrap DP and compatible with the GBIF LifeStage vocabulary. In Camtrap DP, this term corresponds with observations.lifeStage.

behavior

The dominant behaviour of the observed individual(s). We recommend using existing or your own controlled values (e.g. grazing, browsing, rooting, vigilance, running, walking). In Camtrap DP, this term corresponds with observations.behavior.

occurrenceStatus

A statement about the presence or absence of the taxon at a location. When reduced to species observations (filtering out blanks, etc.), camera trap data only contain presence records. Set to present for all records.

occurrenceRemarks

The comments or notes about the observation. These are typically notes (sometimes in the native language of the author) about the observation and/or observed individual(s) that were not or could not be recorded in another field. This information is potentially useful to publish, but may contain sensitive information. In Camtrap DP, this term corresponds with observations.observationComments.

organismID

An identifier for an observed and known individual that was recognized by colour ring, ear tag, skin pattern or other characteristics. Observations with dwc:organismID typically have dwc:individualCount of 1, unless the dwc:organismID refers to a known group. Unless a globally unique identifier is available and known for the individual, we recommend using the code/identifier assigned within the camera trap study to the individual, allowing users to find all observations of this individual within the dataset. In Camtrap DP, this term corresponds with observations.individualID.

eventID

An identifier for the event the observation belongs to. We recommend providing the identifier for the event (typically a sequence) as used for event-based classification. Using an Occurrence core, events will not have their own records, but providing their identifier in dwc:eventID allows users to find all observations (and media) for a specific event. Use a stable unique identifier. Note that GBIF.org will automatically group observations with the same dwc:eventID as belonging together. In Camtrap DP, this term corresponds with observations.eventID.

parentEventID

An identifier for a broader event than the one identified by eventID. We recommend providing the identifier of the deployment. Using an Occurrence core, deployments will not have their own records, but providing their identifier in dwc:parentEventID allows users to find all observations (and media) for a specific deployment. Use a stable unique identifier. Note that GBIF.org will automatically group observations with the same dwc:parentEventID as belonging together. In Camtrap DP, this term corresponds with observations.deploymentID.

eventDate

The date, date-time or date-time interval during which the event occurred. We recommend using a single timestamp for media-based classifications and an interval (consisting of the timestamps of the start and end of the event identified by eventID) for event-based classifications. Write timestamps in the ISO 8601 format (YYYY-MM-DDTHH:MM:SS), use / to indicate an interval and include the timezone (+02:00) or convert and indicate as UTC (Z). In Camtrap DP, this term corresponds with observations.eventStart and observations.eventEnd, or observations.eventStart alone if both are equal. A sketch for deriving such values follows the examples.

Examples:

  • 2020-07-29T05:38:55Z/2020-07-29T05:39:00Z
  • 2020-07-29T05:38:55Z
  • 2020-07-29T07:38:55+02:00
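
A sketch of how such values can be derived from Camtrap DP's eventStart and eventEnd in R (assuming timestamps are parsed as POSIXct in UTC):

```r
# Format a timestamp as ISO 8601 in UTC (the trailing "Z")
iso8601 <- function(x) format(x, "%Y-%m-%dT%H:%M:%SZ", tz = "UTC")

event_start <- as.POSIXct("2020-07-29 05:38:55", tz = "UTC")
event_end   <- as.POSIXct("2020-07-29 05:39:00", tz = "UTC")

# Interval for event-based classifications, single timestamp otherwise
event_date <- if (event_start == event_end) {
  iso8601(event_start)
} else {
  paste0(iso8601(event_start), "/", iso8601(event_end))
}
event_date  # "2020-07-29T05:38:55Z/2020-07-29T05:39:00Z"
```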
habitat

A category or description of the habitat in which the event occurred. This is typically the habitat at the time of deployment, with values repeated for all records of this deployment. Values can be controlled, ideally using an existing classification system, or free-text descriptions. In Camtrap DP, this term corresponds with deployments.habitat.

samplingProtocol

The method(s) or protocol(s) used during the event. We recommend using the controlled value camera trap. This allows users to search for records with this protocol across datasets.

samplingEffort

The amount of effort expended during the event. We recommend providing the date-time interval during which the camera trap was deployed, using the same formatting conventions as eventDate. In Camtrap DP, this term corresponds with deployments.deploymentStart and deployments.deploymentEnd.

eventRemarks

The comments or notes about the event. These are typically notes (sometimes in the native language of the author) about the deployment that were not or could not be recorded in another field. This information is potentially useful to publish, but may contain sensitive information. We also recommend this term for providing other (structured) information associated with the deployment, such as bait use, feature type or tags, as pipe (|) separated values. In Camtrap DP, this term corresponds with deployments.deploymentComments and relates to deployments.baitUse, deployments.featureType and deployments.deploymentTags.

Examples:

  • camera trap with bait near burrow
  • camera trap without bait | tags: position:above stream
  • camera malfunction on 29/06/2020

locationID

An identifier for the location. This identifier allows users to find all observations (and media) for a specific location (across deployments). Use a stable unique identifier. In Camtrap DP, this term corresponds with deployments.locationID.

locality

The name of the location. This is typically a name or code assigned within the camera trap study. In Camtrap DP, this term corresponds with deployments.locationName.

minimumDepthInMeters & maximumDepthInMeters

The depth (in meters) below the local surface. For (marine) camera trap studies, this is the depth at which the camera is deployed. We recommend providing either a camera depth or a camera height, not both. In Camtrap DP, these terms correspond with deployments.cameraDepth.

minimumDistanceAboveSurfaceInMeters & maximumDistanceAboveSurfaceInMeters

The height (in meters) above a reference surface. For camera trap studies, this is the height at which the camera is deployed. We recommend providing either a camera depth or a camera height, not both. In Camtrap DP, these terms correspond with deployments.cameraHeight.

decimalLatitude & decimalLongitude

The geographic latitude and longitude of the location, in decimal degrees. Latitude values lie between -90 and 90, longitude values between -180 and 180. For camera trap studies, these are typically obtained by GPS and recorded in the data management system. We recommend providing the coordinates as stored in the data management system, unless they need to be rounded/generalized to protect sensitive information. In Camtrap DP, these terms correspond with deployments.latitude and deployments.longitude respectively.

geodeticDatum

The spatial reference system used for the geographic coordinates. For coordinates obtained by GPS this is typically EPSG:4326 (i.e. WGS84) (Chapman and Wieczorek 2020). In Camtrap DP, WGS84 is implied for the terms deployments.latitude and deployments.longitude.

coordinateUncertaintyInMeters

The horizontal distance (in meters) from the geographic coordinates describing the smallest circle containing the location. We recommend 30 meters as a reasonable lower limit for coordinates obtained by GPS, but see Section 3.4.2 for details on what elements contribute to the uncertainty. Generalized/rounded coordinates in particular will increase the dwc:coordinateUncertaintyInMeters. In Camtrap DP, this term corresponds with deployments.coordinateUncertainty.

coordinatePrecision

The decimal precision of the geographic coordinates, if known. This information is known for generalized/rounded coordinates, and we recommend providing it in that case (e.g. 0.001 for coordinates that were rounded to 3 decimals). In Camtrap DP, this term corresponds with package.coordinatePrecision, although there it is specified for the dataset as a whole, rather than per record.

identifiedBy

The person or species classification model that identified the observed individual(s) and assigned the scientificName. We recommend providing a single name: that of the person or model that made the (most recent) classification. Although classifying can be broader than assigning a scientific name, it is likely to involve that aspect for animal observations. Note that this term can contain personal data. In Camtrap DP, this term corresponds with observations.classifiedBy.

Examples:

  • Peter Desmet
  • Western Europe species model Version 1
  • anonymized:3eb30aa

dateIdentified

The date or date-time on which the identification was made. We recommend providing a single timestamp: that of the classification made by the person or model indicated in identifiedBy. This information is typically recorded by the data management system. Write timestamps in the ISO 8601 format (YYYY-MM-DDTHH:MM:SS) and include the timezone (+02:00) or convert and indicate as UTC (Z). In Camtrap DP, this term corresponds with observations.classificationTimestamp.

identificationRemarks

The comments or notes about the identification. We recommend using this term to provide information on whether the classification was made by a human or species classification model as well as the degree of certainty if available (often recorded for AI classification). In Camtrap DP, this term relates to observations.classificationMethod and observations.classificationProbability.

Examples:

  • classified by a human
  • classified by a machine with a degree of certainty of 89%

taxonID

An identifier for scientificName. This identifier allows users to find all observations (and media) for a specific taxon. Use a stable unique identifier, preferably one assigned by an authoritative source. In Camtrap DP, this term corresponds with the taxonID of the corresponding taxon in package.taxonomic.

scientificName

The scientific name of the observed individual(s). In Camtrap DP, this term corresponds with observations.scientificName.

kingdom

The kingdom in which the taxon with the scientificName is classified. It allows services like GBIF’s species name matching to disambiguate between homonyms. Most likely Animalia for all records, since camera trap data almost never contain classifications of plants, fungi or other kingdoms.

4.4.3. Audubon Media Description extension

As described above, we recommend the use of an Audubon Media Description extension for expressing camera trap data as a Darwin Core Archive. See Table 18 for term recommendations.

Note that the Audubon Media Description extension can contain duplicates, an important difference with Camtrap DP's media table, where each file is listed only once. Repeated occurrenceID values are the result of a single event-based observation being related to multiple media files (e.g. observation 05230014 in Table 16). Repeated identifier values are the result of a media file being the source for multiple observations (e.g. multiple species observed in the same image, such as in media file e68deaed in Table 16). The extension should however contain unique occurrenceID + identifier combinations (see the sketch below).
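
That constraint is easy to verify before publishing. A minimal sketch in R, assuming the extension is available as a CSV file (the file name is an assumption):

```r
extension <- read.csv("dwc/multimedia.csv", stringsAsFactors = FALSE)

# Duplicates in occurrenceID or identifier alone are expected (many-to-many),
# but each occurrenceID + identifier combination must be unique
combos <- paste(extension$occurrenceID, extension$identifier)
if (anyDuplicated(combos) > 0) {
  stop("Duplicate occurrenceID + identifier combinations found")
}
```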

Table 18. Recommended terms to use when expressing camera trap data as an Audubon Media Description extension.

| Term | Status | Example value |
| --- | --- | --- |
| occurrenceID | Required | 05230014 |
| identifier | Share if available | 6d65f3e4 |
| type | Share if available | StillImage |
| comments | Share if available | marked as favourite |
| rights | Strongly recommended | https://creativecommons.org/licenses/by/4.0/legalcode |
| CreateDate | Share if available | 2020-06-12T06:04:32+02:00 |
| captureDevice | Share if available | Reconyx-HF2X |
| resourceCreationTechnique | Share if available | motion detection |
| accessURI | Required | https://multimedia.agouti.eu/assets/6d65f3e4-4770-407b-b2bf-878983bf9872/file |
| format | Share if available | image/jpeg |
| serviceExpectation | Share if available | online |

occurrenceID

A foreign key to the occurrenceID in the Occurrence core, to indicate the relation between the observation and the media file(s) on which it is based. This term can contain duplicates, as this is a many-to-many relationship (see note in Section 4.4.3). In Camtrap DP, this term corresponds with observations.observationID, but the relationship between observations and media can be established in several ways: either directly via observations.mediaID or by selecting media that have the same media.deploymentID as the observation and a media.timestamp that falls between the observations.eventStart and observations.eventEnd of the observation.
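
The timestamp-based linking described above can be expressed as a simple filter. A minimal sketch in R, assuming Camtrap DP column names and UTC timestamps:

```r
media        <- read.csv("data/media.csv", stringsAsFactors = FALSE)
observations <- read.csv("data/observations.csv", stringsAsFactors = FALSE)

# Parse ISO 8601 timestamps (assumed to be in UTC, ending in "Z")
to_time <- function(x) as.POSIXct(x, format = "%Y-%m-%dT%H:%M:%SZ", tz = "UTC")
media$timestamp         <- to_time(media$timestamp)
observations$eventStart <- to_time(observations$eventStart)
observations$eventEnd   <- to_time(observations$eventEnd)

# For one observation: media from the same deployment whose timestamp falls
# within the observation's event window
obs <- observations[1, ]
linked_media <- media[
  media$deploymentID == obs$deploymentID &
  media$timestamp >= obs$eventStart &
  media$timestamp <= obs$eventEnd,
]
```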

identifier

An identifier for the media file. Use a stable unique identifier. This term can contain duplicates, as this is a many-to-many relationship (see note in Section 4.4.3). In Camtrap DP, this term corresponds with media.mediaID.

type

The nature of the resource. Use StillImage for images, MovingImage for videos. Do not use dcterms:type, because that term expects a URL value.

comments

The comments or notes about the media file. In contrast with eventRemarks and occurrenceRemarks, notes about the media files themselves are seldom recorded in data management systems. The term could be used to indicate if a media file was marked as favourite or noteworthy. In Camtrap DP, this term corresponds with media.mediaComments and relates to media.favorite.

rights

The licence under which the media file is shared. Note that this applies to the file referenced in accessURI, not to the data in the Audubon Media Description extension (those fall under the dataset licence). We recommend using the same licence for all media files. To enable wide use, we recommend publishing media files under a Creative Commons Zero waiver or a Creative Commons Attribution 4.0 International licence and providing it as a URL: https://creativecommons.org/publicdomain/zero/1.0/legalcode or https://creativecommons.org/licenses/by/4.0/legalcode respectively. Do not use dc:rights, because that term expects a literal value (the full-text copyright statement). In Camtrap DP, this term corresponds with the path of the licence that has the scope media in package.licenses, although there it is specified for the dataset as a whole, rather than per record.

CreateDate

The date-time on which the media file was created. This information is typically extracted from the EXIF metadata by the data management system. Write timestamps in the ISO 8601 format (YYYY-MM-DDTHH:MM:SS) and include the timezone (+02:00) or convert and indicate as UTC (Z). In Camtrap DP, this term corresponds with media.timestamp.

captureDevice

The device(s) used to create the media file. We recommend providing the camera make and model (e.g. Reconyx-HF2X). In Camtrap DP, this term corresponds with deployments.cameraModel.

resourceCreationTechnique

The method(s) used to create or alter the media file. We recommend using this term to provide the trigger method that was used to capture the media file, as controlled values: activity detection or time lapse. In Camtrap DP, this term corresponds with media.captureMethod.

accessURI

The URI (Uniform Resource Identifier) that provides access to the media file. Although the term allows pointing to relative file paths or offline storage, we strongly recommend providing the http/https URL that serves the media file, if available (see Section 3.3.3). Use an http/https URL that serves the media file directly (not an HTML page embedding it), so it can be displayed on occurrence pages at GBIF.org. Camera trap images are typically small enough that it is not necessary to serve a reduced version of the file. In Camtrap DP, this term corresponds with media.filePath.

serviceExpectation

The service expectations users may have of the accessURI. We recommend using the controlled values online for media files that are publicly accessible over http/https and authenticate for media files that are kept private over http/https (see Section 4.2.2.2). In Camtrap DP, these values correspond with true and false respectively in media.filePublic.

format

The file format of the media file. We recommend providing the media type (MIME type) using the controlled values image/jpeg, video/mp4 or video/mpeg of the Audiovisual Core Controlled Vocabulary for Dublin Core. Do not use dcterms:format, because that term expects a URL value. In Camtrap DP, this term corresponds with media.fileMediatype.

Acknowledgements

We thank the following people for making suggestions to this guide and/or providing technical support: Stephen Formel, Donald Hobern, Kyle Copas, Laura Russell, Matt Blissett and members of the Camtrap DP Development Team.

Glossary

activity

Movement in front of a camera. Movement can cause a trigger when certain conditions are met (e.g. within the detection zone and outside the quiet period). Camera traps are typically deployed to capture wildlife activity, but may also record movements of humans, vehicles or vegetation.

alignment

Physical placement of a camera. Consists of camera height, camera depth, camera tilt and camera heading.

artificial intelligence

Ability to perceive, synthesize and infer information as demonstrated by machines, as opposed to humans/animals. In camera trap research, AI is mainly applied as machine learning to perform computer vision tasks. See also Section 3.5.3.

bait

Attractant used to encourage animals to investigate a location or specific point within the detection zone. Baits may be auditory, olfactory, visual, or some combination of these in nature. Whether bait was used for a deployment is useful to know for analyses, as it can alter the natural behaviour of animals. Can be expressed as a categorical description (e.g. acoustic, visual, scent) or a boolean. Also referred to as lure (Meek et al. 2014). deployments.baitUse in Camtrap DP.

blank

Media without objects of interest. Blanks are typically the result of false triggers such as moving vegetation or fluctuating light. Marked as such when classifying to facilitate excluding them from queries.

camera

Device designed to automatically capture media of (wildlife) activity, typically triggered by a combination of heat and motion. Of a certain make and model and uniquely identifiable, e.g. by serial number. Also referred to as camera trap, game camera, trail camera, scouting camera or device.

camera depth

Depth of the (underwater) camera, a component of camera alignment. Typically expressed in meters below the local water surface. deployments.cameraDepth in Camtrap DP.

camera height

Height of the camera above the ground, a component of camera alignment. Can be expressed in meters above the ground or as a categorical description (knee height ~ 0.5m, chest height ~ 1.5m, canopy ~ 3+m in Ahumada et al. (2020)). deployments.cameraHeight in Camtrap DP.

camera tilt

Up or down orientation of the camera, a component of camera alignment. Can be expressed in degrees or as a categorical description (parallel = 0°, pointed downward = -90° in Ahumada et al. (2020)). deployments.cameraTilt in Camtrap DP.

camera heading

Horizontal cardinal orientation of the camera, a component of camera alignment. Can be expressed in degrees from North or as cardinal directions (N, NW, etc.). deployments.cameraHeading in Camtrap DP.

Camtrap DP

Camera Trap Data Package. A community developed data exchange format for camera trap data, maintained under Biodiversity Information Standards (TDWG). See also Section 4.3, Bubnicki et al. (2023) and the Camtrap DP website.

classification

The act of classifying camera trap media, resulting in observations. Not to be confused with taxonomic classification. Can be performed in different steps and with different levels of precision, e.g. 1. media does/does not contain object(s) of interest (i.e. blank), 2. object of interest is human/vehicle/animal or unknown, 3. animal is member of certain taxon (e.g. Sus scrofa, Rodentia), 4. animal is of certain sex/life stage, is known individual x or shows certain behaviour. Classification is typically labour intensive and therefore often aided by computer vision, volunteers and/or classifying sequences rather than individual media files. Also referred to as image classification, annotation or identification. See also Section 3.5.1.

citizen science

Scientific research conducted with participation from the public. Also referred to as community science, crowd science, crowdsourcing, or volunteer monitoring. In camera trap research, citizen scientists can participate in camera trap deployment and classification. See also Section 3.5.2.

cloud computing

Performing computing tasks on a distributed IT infrastructure (“cloud service”). Typically at a cost (“pay as you go”) in return for better performance and less maintenance.

computer vision

Processing, analysing and understanding of media by machines to aid classification, from object tracking to species identification. A form of artificial intelligence, typically trained with machine learning.

covariates

Ecological variables that may affect the behaviour and detection of animals (e.g. bait use, feature type, habitat, canopy cover). Recording covariates is important for further analysis of the data. See also Section 3.4.5.

data exchange format

Format used to exchange data between systems (e.g. Camtrap DP and Darwin Core Archive). Requires data transformation from the source system to the format and from the format to the target system. Well designed data exchange formats facilitate FAIR data exchange, use open formats and provide clear definitions. Also referred to as data standard (when approved through a ratification process).

data management system

Online or desktop application to manage camera trap data. Typically includes functionality to upload media, add deployment information, classify images, export data, invite participants and manage a project. See also Section 3.6.

data repository

Online system for the long-term archival of (research) data (e.g. Zenodo, Dryad and the GBIF IPT). Different from a data management system which is designed for the management (and not necessarily long-term archival) of data.

Darwin Core Archive

Standardized and widely supported data exchange format for biodiversity data, maintained by Biodiversity Information Standards (TDWG). See also Section 4.4 and the Darwin Core text guide.

deployment

Spatial and temporal placement of a camera to sample wildlife images. A camera placed at a location between 1 and 15 January 2020 is a different deployment than the same (or a different) camera placed at the same location between 15 and 30 January 2020. Deployments end by removing or replacing the camera, changing its position or swapping its memory card. Also referred to as sampling point (Wearn and Glover-Kapfer 2017), trap station session (Hendry and Mann 2018) or visit (Newkirk 2016). deployments in Camtrap DP. See also Section 3.4.

deployment group

Logical grouping of deployments, based on spatial, temporal or thematic criteria. A deployment can belong to multiple deployment groups. array (O’Connor et al. 2017), camera trap array (Meek et al. 2014), cluster (Resources Information Standards Committee RISC 2019), paired deployment (Kolowski and Forrester 2017), site and strata (Sun et al. 2021) are spatial deployment groups. sampling campaign (Lamelas-Lopez et al. 2020), sampling event (Fegraus et al. 2011) and session are temporal deployment groups. subproject (eMammal n.d.), survey (Resources Information Standards Committee RISC 2019; Tobler 2015) and tags are thematic deployment groups. study areas (Newkirk 2016) can be considered a deployment group or project in its own right. deployments.deploymentGroups in Camtrap DP. See also Section 3.4.4.

detection distance

Furthest distance in the detection zone at which the camera detects activity. deployments.detectionDistance in Camtrap DP.

detection zone

Area of a location in which a camera sensor is able to detect activity.

event

Action that occurs at a specific location for a specific duration. In camera trap research, events typically refer to animal activity recorded through one or more triggers and forming a sequence, but other definitions might be used when analysing data. Events can be indicated with observations.eventID, observations.eventStart and observations.eventEnd in Camtrap DP. In a Darwin Core Archive, deployments can also be considered events.

EXIF

Exchangeable Image File Format. A format for storing metadata about a media file (e.g. creation date and time, format, resolution, shutter speed, exposure level, camera model), typically stored as part of the media file. media.exifData in Camtrap DP.

FAIR

FAIR (meta)data are (meta)data that meet the principles of findability, accessibility, interoperability and reusability. The FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. See Wilkinson et al. (2016).

feature type

Categorical description of a particular physical feature targeted during the deployment, such as burrow, nest site, or water source. deployments.featureType in Camtrap DP.

file path

String describing the location of a file in a storage system (e.g. data/deployments.csv). When served over http/https, the domain name and file path constitute the file URL (e.g. https://raw.githubusercontent.com/tdwg/camtrap-dp/main/example/deployments.csv).

GDPR

General Data Protection Regulation. A European Union regulation on information privacy, designed to enhance individuals' control and rights over their personal information. See Section 4.2.2.1.

habitat type

Categorical description of the environment and vegetation of a location. Classification systems exist to express habitat (European Environment Agency EEA 2021; IUCN 2019) or vegetation type (Vegetation Subcommittee 2016). deployments.habitat in Camtrap DP.

image

Static media file recorded by a camera. Has no significant duration or audio.

independence interval

Minimum duration between consecutive triggers to be considered belonging to separate sequences. This duration (e.g. 120 seconds) can be defined in a data management system to automatically group media into sequences. This is different from the quiet period, which is a camera setting.

individual

Distinct organism, typically an animal.

location

Physical place where a deployed camera is located. A location can be described with a name and/or identifier and coordinates in a certain reference system (e.g. decimal latitude and longitude in WGS84). Also referred to as camera location (Newkirk 2016), station (Van Berkel 2014; Zaragozí et al. 2015), project station (WildCAM 2018) or trap station (Hendry and Mann 2018). Deployment location with a deployments.locationName, deployments.locationID, deployments.latitude, deployments.longitude, and deployments.coordinateUncertainty in Camtrap DP. See also Section 3.4.2.

machine learning

Computational technique that makes use of (training) data and algorithms to imitate the way that humans learn, gradually improving accuracy.

media

Media files (plural) captured by a camera. Also referred to as photos (Newkirk 2016). media in Camtrap DP.

media file

A (audio)visual file captured by a camera. Can be an image or video. A media file typically has an identifier, file name, timestamp when it was created and associated metadata (e.g. EXIF). To access a media file, one needs to know its file path and have the required access rights. Media with media.mediaID, media.timestamp, media.fileName, media.filePath in Camtrap DP. See also Section 3.3.

media type

Standardized expression of a file format (e.g. image/jpeg for an image). Formerly known as MIME type. media.fileMediatype in Camtrap DP.

observation

Result of a classification, i.e. a record of what can be seen or heard on media files. Has an observation type to differentiate between animal and other observations. observations in Camtrap DP. See also Section 3.5.

observation type

Categorical description of the type of observation. Recorded as part of the classification, allowing to differentiate between blanks, observations of humans or vehicles and animal observations. observations.observationType in Camtrap DP.

organization

Entity comprising one or more people that share a particular purpose, such as a company, institution, association or partnership. Organizations can be directly associated with a project (e.g. as rights holder or publisher) or indirectly via the affiliation of the project participants. Organizations are listed in package.contributors in Camtrap DP.

participant

Person associated with a project, performing one or more roles. Participant information typically includes name and contact information and is subject to GDPR. Organizations can also be considered participants. Also referred to as contributor, sometimes user. Participants are listed in package.contributors in Camtrap DP. See also Section 3.2.1.

role

Function carried out by a participant in a project, such as project lead, data manager or volunteer classifying media. Participants can have multiple roles and roles are typically associated with different rights in a data management system (e.g. the right to invite new participants). Also referred to as participant type. package.contributors.role in Camtrap DP. See also Section 3.2.1.

project

Scientific investigation by a number of participants, with a defined objective, methodology, and taxonomical, spatial and temporal scope. The objective of camera trap projects is typically to study and understand wildlife. Also referred to as study. package.project in Camtrap DP, where a dataset is associated with one and only one project. See also Section 3.2.

quiet period

Predefined duration after a trigger when activity detected by the camera sensor is ignored. deployments.cameraDelay in Camtrap DP.

sampling design

Strategy for deploying cameras to facilitate a certain research purpose. Can be expressed as a categorical description (e.g. simple random, systematic random, opportunistic). package.project.samplingDesign in Camtrap DP.

sensitivity

Trigger sensitivity setting used on a camera sensor.

sensor

Device that detects changes in the environment, such as movement, heat, light, sound, or other stimuli. Modern camera traps typically use an integrated passive infrared (PIR) sensor that is designed to detect activity based on a combination of heat and motion.

sequence

Series of media files taken in rapid succession but separated by a time interval less than the set independence interval and forming an animated record of an event. Also referred to as series (Bayne et al. 2018).

setup

The act of deploying a camera in the field. Involves alignment, defining the camera settings and securing the camera to ensure optimal data capture. observations.cameraSetupType in Camtrap DP.

site

Geographic area containing multiple locations.

species recognition

Automated identification and classification of different animal species based on visual or auditory data captured by camera traps.

subproject

Type of deployment group used to subdivide very large projects into more manageable units.

trigger

Sensor condition that prompts a camera to activate and capture media. Also used to indicate the series of consecutive media files resulting from that trigger. One or more triggers form a sequence. Also referred to as burst.

UUID

Universally Unique Identifier (UUID). A type of globally unique identifier that can be generated without a central registration authority. Example: 6d65f3e4-4770-407b-b2bf-878983bf9872.

video

Moving media file recorded by a camera. Has a specific duration and can include audio.

References

Ahumada J A, Fegraus E, Birch T, Flores N, Kays R, O’Brien T G, Palmer J, Schuttler S, Zhao J Y, Jetz W, Kinnaird M, Kulkarni S, Lyet A, Thau D, Duong M, Oliver R & Dancer A (2020) Wildlife Insights: A Platform to Maximize the Potential of Camera Trap and Other Passive Sensor Wildlife Data for the Planet. Environmental Conservation 47: 1–6. https://doi.org/10.1017/S0376892919000298

Alony I, Haski-Leventhal D, Lockstone-Binney L, Holmes K & Meijs L C P M (2020) Online volunteering at DigiVol: an innovative crowd-sourcing approach for heritage tourism artefacts preservation. Journal of Heritage Tourism 15: 14–26. https://doi.org/10.1080/1743873X.2018.1557665

Amatulli G, Domisch S, Tuanmu M-N, Parmentier B, Ranipeta A, Malczyk J & Jetz W (2018) A suite of global, cross-scale topographic variables for environmental and biodiversity modeling. Scientific Data 5: 180040. https://doi.org/10.1038/sdata.2018.40

Bayne E, MacPhail A, Copp C, Packer M & Klassen C (2018) Wildtrax. https://www.wildtrax.ca

Bennett J M, Calosi P, Clusella-Trullas S, Martínez B, Sunday J, Algar A C, Araújo M B, Hawkins B A, Keith S, Kühn I, Rahbek C, Rodríguez L, Singer A, Villalobos F, Ángel Olalla-Tárraga M & Morales-Castilla I (2018) GlobTherm, a global database on thermal tolerances for aquatic and terrestrial organisms. Scientific Data 5: 180022. https://doi.org/10.1038/sdata.2018.22

Bradley S (2017) MammalWeb – Participant guided development of a generalised citizen science web platform. http://edshare.soton.ac.uk/id/eprint/18337

Brides K, Middleton J, Leighton K & Grogan A (2018) The use of camera traps to identify individual colour-marked geese at a moulting site. Ringing & Migration 33: 19–22. https://doi.org/10.1080/03078698.2018.1525194

Bubnicki J, Norton B, Baskauf S, Bruce T, Cagnacci F, Casaer J, Churski M, Cromsigt J, Dal Farra S, Fiderer C, Forrester T, Hendry H, Heurich M, Hofmeester T, Jansen P, Kays R, Kuijper D, Liefting Y, Linnell J, Luskin M, Mann C, Milotic T, Newman P, Niedballa J, Oldoni D, Ossi F, Robertson T, Rovero F, Rowcliffe M, Seidenari L, Stachowicz I, Stowell D, Tobler M, Wieczorek J, Zimmermann F & Desmet P (2023) Camtrap DP: An open standard for the FAIR exchange and archiving of camera trap data. Behavior and Ethology. preprint https://doi.org/10.32942/X2BC8J

Bubnicki J W, Churski M & Kuijper D P J (2016) TRAPPER: an open source web‐based application to manage camera trapping projects. Poisot T (Ed). Methods in Ecology and Evolution 7: 1209–1216. https://doi.org/10.1111/2041-210X.12571

Buchhorn M, Lesiv M, Tsendbazar N-E, Herold M, Bertels L & Smets B (2020) Copernicus Global Land Cover Layers—Collection 2. Remote Sensing 12: 1044. https://doi.org/10.3390/rs12061044

Burton C, Neilson E, Moreira-Arce D, Ladle A, Steenweg R, Fisher J, Bayne E & Boutin S (2015) REVIEW: Wildlife camera trapping: a review and recommendations for linking surveys to ecological processes. Journal of Applied Ecology 52. https://doi.org/10.1111/1365-2664.12432

Bánki O, Roskov Y, Döring M & others (2023) Catalogue of Life Checklist. https://doi.org/10.48580/dfs6

Cadman M, González Talaván A, Athreya V, Chavan V, Ghosh-Harihar M, Hanssen F, Harihar A, Hirsch T, Lindgaard A, Mathur V, Mahlum F, Pandav B, Talukdar G & Vang R (2014) Publishing Camera Trap Data, a Best Practice Guide. Global Biodiversity Information Facility.

Caravaggi A, Burton A C, Clark D A, Fisher J T, Grass A, Green S, Hobaiter C, Hofmeester T R, Kalan A K, Rabaiotti D & Rivet D (2020) A review of factors to consider when using camera traps to study animal behavior to inform wildlife ecology and conservation. Conservation Science and Practice 2. https://doi.org/10.1111/csp2.239

Cartuyvels E, Adriaens T, Baert K, Baert W, Boiten G, Brosens D, Casaer J, De Boer A, Debrabandere M, Devisscher S, Donckers D, Dupont S, Franceus W, Fritz H, Fromme L, Gethöffer F, Herbots C, Huysentruyt F, Kehl L, Letheren L, Liebgott L, Liefting Y, Lodewijkx J, Maistrelli C, Matthies B, Meijvisch K, Moerkens D, Neukermans A, Neukermans B, Ronsijn J, Schamp K, Slootmaekers D, Van Der Beeck D & Desmet P (2022) MICA - Muskrat and coypu camera trap observations in Belgium, the Netherlands and Germany. https://doi.org/10.15468/5tb6ze

Casaer J, Milotic T, Liefting Y, Desmet P & Jansen P (2019) Agouti: A platform for processing and archiving of camera trap images. Biodiversity Information Science and Standards 3: e46690. https://doi.org/10.3897/biss.3.46690

Chapman A (2020) Current Best Practices for Generalizing Sensitive Species Occurrence Data. https://doi.org/10.15468/DOC-5JP4-5G10

Chapman A & Wieczorek J (2020) Georeferencing Best Practices. https://doi.org/10.15468/DOC-GG7H-S853

Chapman F M (1927) Who treads our trails? National geographic 52: 331–346.

Clements J F, Schulenberg T S, Iliff M J, Fredericks T A, Gerbracht J A, Lepage D, Billerman S M, Sullivan B L & Wood C L (2022) The eBird/Clements checklist of Birds of the World. https://www.birds.cornell.edu/clementschecklist/download/

Cusack J J, Dickman A J, Rowcliffe J M, Carbone C, Macdonald D W & Coulson T (2015) Random versus Game Trail-Based Camera Trap Placement Strategy for Monitoring Terrestrial Mammal Communities. Guralnick R (Ed). PLOS ONE 10: e0126373. https://doi.org/10.1371/journal.pone.0126373

DataCite Metadata Working Group (2021) DataCite Metadata Schema Documentation for the Publication and Citation of Research Data and Other Research Outputs v4.4. https://doi.org/10.14454/3W3Z-SA82

Delisle Z J, Flaherty E A, Nobbe M R, Wzientek C M & Swihart R K (2021) Next-Generation Camera Trapping: Systematic Review of Historic Trends Suggests Keys to Expanded Research Applications in Ecology and Conservation. Frontiers in Ecology and Evolution 9: 617996. https://doi.org/10.3389/fevo.2021.617996

Dinerstein E, Olson D, Joshi A, Vynne C, Burgess N D, Wikramanayake E, Hahn N, Palminteri S, Hedao P, Noss R, Hansen M, Locke H, Ellis E C, Jones B, Barber C V, Hayes R, Kormos C, Martin V, Crist E, Sechrest W, Price L, Baillie J E M, Weeden D, Suckling K, Davis C, Sizer N, Moore R, Thau D, Birch T, Potapov P, Turubanova S, Tyukavina A, Souza N de, Pintea L, Brito J C, Llewellyn O A, Miller A G, Patzelt A, Ghazanfar S A, Timberlake J, Klöser H, Shennan-Farpón Y, Kindt R, Lillesø J-P B, Breugel P van, Graudal L, Voge M, Al-Shammari K F & Saleem M (2017) An Ecoregion-Based Approach to Protecting Half the Terrestrial Realm. BioScience 67: 534–545. https://doi.org/10.1093/biosci/bix014

eMammal (n.d.) Study Design Recommendation for a Park. https://emammal.si.edu/about/study-design/park

European Environment Agency (EEA) (2021) EUNIS habitat classification. https://eunis.eea.europa.eu/habitats.jsp

Fegraus E H, Lin K, Ahumada J A, Baru C, Chandra S & Youn C (2011) Data acquisition and management software for camera trap data: A case study from the TEAM Network. Ecological Informatics 6: 345–353. https://doi.org/10.1016/j.ecoinf.2011.06.003

Forrester T, O’Brien T, Fegraus E, Jansen P, Palmer J, Kays R, Ahumada J, Stern B & McShea W (2016) An Open Standard for Camera Trap Data. Biodiversity Data Journal 4: e10197. https://doi.org/10.3897/BDJ.4.e10197

Fortson L, Masters K, Nichol R, Edmondson E M, Lintott C, Raddick J & Wallin J (2012) Galaxy zoo: Morphological classification and citizen science. Advances in machine learning and data mining for astronomy 2012: 213–236.

GBIF Secretariat (2015) EML Agent Role Vocabulary. http://rs.gbif.org/vocabulary/gbif/agentRole

GBIF Secretariat (2018) Quick guide to publishing data through GBIF.org. GBIF. https://www.gbif.org/publishing-data

GBIF Secretariat (2020) Data quality requirements: Occurrence datasets. GBIF. https://www.gbif.org/data-quality-requirements-occurrences

GBIF Secretariat (2022) Diversifying the GBIF Data Model. GBIF. https://www.gbif.org/new-data-model

Glover‐Kapfer P, Soto‐Navarro C A & Wearn O R (2019) Camera‐trapping version 3.0: current constraints and future priorities for development. Rowcliffe M, Sollmann R (Eds). Remote Sensing in Ecology and Conservation 5: 209–223. https://doi.org/10.1002/rse2.106

Gomez Villa A, Salazar A & Vargas F (2017) Towards automatic wild animal monitoring: Identification of animal species in camera-trap images using very deep convolutional neural networks. Ecological Informatics 41: 24–32. https://doi.org/10.1016/j.ecoinf.2017.07.004

Green S E, Rees J P, Stephens P A, Hill R A & Giordano A J (2020) Innovations in Camera Trapping Technology and Approaches: The Integration of Citizen Science and Artificial Intelligence. Animals 10: 132. https://doi.org/10.3390/ani10010132

Groom Q, De Smedt S, Veríssimo Pereira N, Bogaerts A & Engledow H (2018) DoeDat, the Crowdsourcing Platform of Meise Botanic Garden. Biodiversity Information Science and Standards 2: e26803. https://doi.org/10.3897/biss.2.26803

Guillera-Arroita G, Ridout M S & Morgan B J T (2010) Design of occupancy studies with imperfect detection. Methods in Ecology and Evolution 1: 131–139. https://doi.org/10.1111/j.2041-210X.2010.00017.x

Hendry H & Mann C (2018) Camelot—intuitive software for camera-trap data management. Oryx 52: 15–15. https://doi.org/10.1017/S0030605317001818

Hobbs M T & Brehme C S (2017) An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates. Crowther MS (Ed). PLOS ONE 12: e0185026. https://doi.org/10.1371/journal.pone.0185026

Howe E J, Buckland S T, Després-Einspenner M-L & Kühl H S (2017) Distance sampling with camera traps. Matthiopoulos J (Ed). Methods in Ecology and Evolution 8: 1558–1565. https://doi.org/10.1111/2041-210x.12790

Hsing P Y, Bradley S, Kent V T, Hill R A, Smith G C, Whittingham M J, Cokill J, Crawley D, MammalWeb volunteers & Stephens P A (2018) Economical crowdsourcing for camera trap image classification. Rowcliffe M, Wearn O (Eds). Remote Sensing in Ecology and Conservation 4: 361–374. https://doi.org/10.1002/rse2.84

IUCN (2019) Habitats Classification Scheme. https://www.iucnredlist.org/resources/habitat-classification-scheme

Jones K E, Bielby J, Cardillo M, Fritz S A, O’Dell J, Orme C D L, Safi K, Sechrest W, Boakes E H, Carbone C, Connolly C, Cutts M J, Foster J K, Grenyer R, Habib M, Plaster C A, Price S A, Rigby E A, Rist J, Teacher A, Bininda-Emonds O R P, Gittleman J L, Mace G M & Purvis A (2009) PanTHERIA: a species‐level database of life history, ecology, and geography of extant and recently extinct mammals: Ecological Archives E090‐184. Michener WK (Ed). Ecology 90: 2648–2648. https://doi.org/10.1890/08-1494.1

Jung M, Dahal P R, Butchart S H M, Donald P F, De Lamo X, Lesiv M, Kapos V, Rondinini C & Visconti P (2020) A global map of terrestrial habitat types. Scientific Data 7: 256. https://doi.org/10.1038/s41597-020-00599-8

Kays R, Arbogast B S, Baker‐Whatton M, Beirne C, Boone H M, Bowler M, Burneo S F, Cove M V, Ding P, Espinosa S, Gonçalves A L S, Hansen C P, Jansen P A, Kolowski J M, Knowles T W, Lima M G M, Millspaugh J, McShea W J, Pacifici K, Parsons A W, Pease B S, Rovero F, Santos F, Schuttler S G, Sheil D, Si X, Snider M & Spironello W R (2020) An empirical evaluation of camera trap study design: How many, how long and when? Fisher D (Ed). Methods in Ecology and Evolution 11: 700–713. https://doi.org/10.1111/2041-210X.13370

Kolowski J M & Forrester T D (2017) Camera trap placement and the potential for bias due to trails and other features. Arlettaz R (Ed). PLOS ONE 12: e0186679. https://doi.org/10.1371/journal.pone.0186679

Kucera T E & Barrett R H (2011) A History of Camera Trapping. In: O’Connell AF, Nichols JD, Karanth KU (Eds), Camera Traps in Animal Ecology: Methods and Analyses. Springer Japan, Tokyo, 9–26. https://doi.org/10.1007/978-4-431-99495-4_2

Lamelas-Lopez L, Pardavila X, Amorim I & Borges P (2020) Wildlife inventory from camera-trapping surveys in the Azores (Pico and Terceira islands). Biodiversity Data Journal 8: e47865. https://doi.org/10.3897/BDJ.8.e47865

Lasky M (2016) North Carolina’s Candid Critters. https://emammal.si.edu/north-carolinas-candid-critters

Law B E, Arkebauer T, Campbell J L, Chen J, Sun O, Schwartz M, Ingen C van & Verma S (2008) Terrestrial carbon observations: Protocols for vegetation sampling and data submission.

Life MICA (2019) MICA - Management of Invasive Coypu and muskrAt in Europe. https://lifemica.eu/ (accessed 30 June 2023)

Mackenzie D I & Royle J A (2005) Designing occupancy studies: general advice and allocating survey effort. Journal of Applied Ecology 42: 1105–1114. https://doi.org/10.1111/j.1365-2664.2005.01098.x

Marcus Rowcliffe J, Carbone C, Jansen P A, Kays R & Kranstauber B (2011) Quantifying the sensitivity of camera traps: an adapted distance sampling approach. Methods in Ecology and Evolution 2: 464–476. https://doi.org/10.1111/j.2041-210X.2011.00094.x

McIntyre T, Majelantle T L, Slip D J & Harcourt R G (2020) Quantifying imperfect camera-trap detection probabilities: implications for density modelling. Wildlife Research 47: 177. https://doi.org/10.1071/WR19040

Meek P, Fleming P, Ballard G, Claridge A, Banks P, Sanderson J & Swann D (2014) Camera Trapping: Wildlife Management and Research. CSIRO Publishing.

Meek P D, Ballard G, Falzon G, Williamson J, Milne H, Farrell R, Stover J, Mather-Zardain A T, Bishop J C, Cheung E K-W, Lawson C K, Munezero A M, Schneider D, Johnston B E, Kiani E, Shahinfar S, Sadgrove E J & Fleming P J S (2020) Camera Trapping Technology and Related Advances: into the New Millennium. Australian Zoologist 40: 392–403. https://doi.org/10.7882/AZ.2019.035

Newkirk E S (2016) CPW Photo Warehouse. http://cpw.state.co.us/learn/Pages/ResearchMammalsSoftware.aspx

Nguyen H, Maclagan S J, Nguyen T D, Nguyen T, Flemons P, Andrews K, Ritchie E G & Phung D (2017) Animal Recognition and Identification with Deep Convolutional Neural Networks for Automated Wildlife Monitoring. In: 2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 40–49. https://doi.org/10.1109/DSAA.2017.31

Norouzzadeh M S, Morris D, Beery S, Joshi N, Jojic N & Clune J (2021) A deep active learning system for species identification and counting in camera trap images. Methods in Ecology and Evolution 12: 150–161. https://doi.org/10.1111/2041-210X.13504

Oldoni D, Desmet P & Huybrechts P (2023) camtraptor: Read, Explore and Visualize Camera Trap Data Packages. https://inbo.github.io/camtraptor/

Oliveira B F, São-Pedro V A, Santos-Barrera G, Penone C & Costa G C (2017) AmphiBIO, a global database for amphibian ecological traits. Scientific Data 4: 170123. https://doi.org/10.1038/sdata.2017.123

O’Brien T G (2010) Wildlife picture index and biodiversity monitoring: issues and future directions. Animal Conservation 13: 350–352. https://doi.org/10.1111/j.1469-1795.2010.00384.x

O’Connell A F, Nichols J D & Karanth K U eds. (2011) Camera Traps in Animal Ecology: Methods and Analyses. Springer Japan, Tokyo.

O’Connor K M, Nathan L R, Liberati M R, Tingley M W, Vokoun J C & Rittenhouse T A G (2017) Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset. Bonter DN (Ed). PLOS ONE 12: e0175684. https://doi.org/10.1371/journal.pone.0175684

Price Tack J L, West B S, McGowan C P, Ditchkoff S S, Reeves S J, Keever A C & Grand J B (2016) AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images. Ecological Informatics 36: 145–151. https://doi.org/10.1016/j.ecoinf.2016.11.003

Resources Information Standards Committee (RISC) (2019) Wildlife Camera Metadata Protocol: Standards for Components of British Columbia’s Biodiversity No. 44. https://www2.gov.bc.ca/assets/download/DABCE3A5C7934410A8307285070C24EA

Riley S J, DeGloria S D & Elliot R (1999) A Terrain Ruggedness Index That Quantifies Topographic Heterogeneity. Intermountain Journal of Sciences 5: 23–27. https://download.osgeo.org/qgis/doc/reference-docs/Terrain_Ruggedness_Index.pdf

Robertson T, Döring M, Guralnick R, Bloom D, Wieczorek J, Braak K, Otegui J, Russell L & Desmet P (2014) The GBIF Integrated Publishing Toolkit: Facilitating the Efficient Publishing of Biodiversity Data on the Internet. Little DP (Ed). PLoS ONE 9: e102623. https://doi.org/10.1371/journal.pone.0102623

Rovero F, Tobler M & Sanderson J (2010) Camera trapping for inventorying terrestrial vertebrates. In: Manual on Field Recording Techniques and Protocols for All Taxa Biodiversity Inventories. AbcTaxa. https://biblio.naturalsciences.be/rbins-publications/abc-txa/abc-taxa-08/chapter-6.pdf

Rovero F & Zimmermann F eds. (2016) Camera trapping for wildlife research. Pelagic Publishing, Exeter, UK.

Rovero F, Zimmermann F, Berzi D & Meek P (2013) “Which camera trap type and how many do I need?” A review of camera features and study designs for a range of wildlife research applications. Hystrix 24. https://doi.org/10.4404/hystrix-24.2-8789

Rowcliffe J M, Field J, Turvey S T & Carbone C (2008) Estimating Animal Density Using Camera Traps without the Need for Individual Recognition. Journal of Applied Ecology 45: 1228–1236. https://www.jstor.org/stable/20144086 (accessed 29 June 2023)

Rowcliffe J M, Jansen P A, Kays R, Kranstauber B & Carbone C (2016) Wildlife speed cameras: measuring animal travel speed and day range using camera traps. Pettorelli N (Ed). Remote Sensing in Ecology and Conservation 2: 84–94. https://doi.org/10.1002/rse2.17

Shannon G, Lewis J S & Gerber B D (2014) Recommended survey designs for occupancy modelling using motion-activated cameras: insights from empirical wildlife data. PeerJ 2: e532. https://doi.org/10.7717/peerj.532

Simpson R, Page K R & De Roure D (2014) Zooniverse: observing the world’s largest citizen science platform. In: Proceedings of the 23rd International Conference on World Wide Web. ACM, Seoul, Korea, 1049–1054. https://doi.org/10.1145/2567948.2579215

Sollmann R, Gardner B & Belant J L (2012) How Does Spatial Study Design Influence Density Estimates from Spatial Capture-Recapture Models? Waterman JM (Ed). PLoS ONE 7: e34575. https://doi.org/10.1371/journal.pone.0034575

Soria C D, Pacifici M, Di Marco M, Stephen S M & Rondinini C (2021) COMBINE: a coalesced mammal database of intrinsic and extrinsic traits. Ecology 102: e03344. https://doi.org/10.1002/ecy.3344

Sun C, Beirne C, Burgar J M, Howey T, Fisher J T & Burton A C (2021) Simultaneous monitoring of vegetation dynamics and wildlife activity with camera traps to assess habitat change. Rowcliffe M, Hofmeester T (Eds). Remote Sensing in Ecology and Conservation 7: 666–684. https://doi.org/10.1002/rse2.222

Sunarto S, Sollmann R, Mohamed A & Kelly M J (2013) Camera trapping for the study and conservation of tropical carnivores. The Raffles Bulletin of Zoology 28: 21–42. http://zoobank.org/urn:lsid:zoobank.org:pub:804A6DC9-A92A-41AE-A820-F3DA48614761

Swanson A, Kosmala M, Lintott C, Simpson R, Smith A & Packer C (2015) Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna. Scientific Data 2: 150026. https://doi.org/10.1038/sdata.2015.26

Tobias J A, Sheard C, Pigot A L, Devenish A J M, Yang J, Sayol F, Neate‐Clegg M H C, Alioravainen N, Weeks T L, Barber R A, Walkden P A, MacGregor H E A, Jones S E I, Vincent C, Phillips A G, Marples N M, Montaño‐Centellas F A, Leandro‐Silva V, Claramunt S, Darski B, Freeman B G, Bregman T P, Cooney C R, Hughes E C, Capp E J R, Varley Z K, Friedman N R, Korntheuer H, Corrales‐Vargas A, Trisos C H, Weeks B C, Hanz D M, Töpfer T, Bravo G A, Remeš V, Nowak L, Carneiro L S, Moncada R. A J, Matysioková B, Baldassarre D T, Martínez‐Salinas A, Wolfe J D, Chapman P M, Daly B G, Sorensen M C, Neu A, Ford M A, Mayhew R J, Fabio Silveira L, Kelly D J, Annorbah N N D, Pollock H S, Grabowska‐Zhang A M, McEntee J P, Carlos T. Gonzalez J, Meneses C G, Muñoz M C, Powell L L, Jamie G A, Matthews T J, Johnson O, Brito G R R, Zyskowski K, Crates R, Harvey M G, Jurado Zevallos M, Hosner P A, Bradfer‐Lawrence T, Maley J M, Stiles F G, Lima H S, Provost K L, Chibesa M, Mashao M, Howard J T, Mlamba E, Chua M A H, Li B, Gómez M I, García N C, Päckert M, Fuchs J, Ali J R, Derryberry E P, Carlson M L, Urriza R C, Brzeski K E, Prawiradilaga D M, Rayner M J, Miller E T, Bowie R C K, Lafontaine R M, Scofield R P, Lou Y, Somarathna L, Lepage D, Illif M, Neuschulz E L, Templin M, Dehling D M, Cooper J C, Pauwels O S G, Analuddin K, Fjeldså J, Seddon N, Sweet P R, DeClerck F A J, Naka L N, Brawn J D, Aleixo A, Böhning‐Gaese K, Rahbek C, Fritz S A, Thomas G H & Schleuning M (2022) AVONET: morphological, ecological and geographical data for all birds. Coulson T (Ed). Ecology Letters 25: 581–597. https://doi.org/10.1111/ele.13898

Tobler M W, Carrillo-Percastegui S E, Leite Pitman R, Mares R & Powell G (2008) An evaluation of camera traps for inventorying large- and medium-sized terrestrial rainforest mammals. Animal Conservation 11: 169–178. https://doi.org/10.1111/j.1469-1795.2008.00169.x

Tobler M W & Powell G V N (2013) Estimating jaguar densities with camera traps: Problems with current designs and recommendations for future studies. Biological Conservation 159: 109–118. https://doi.org/10.1016/j.biocon.2012.12.009

Van Berkel T (2014) Camera trapping for wildlife conservation: Expedition field techniques. Geography Outdoors.

Vegetation Subcommittee (2016) U.S. National Vegetation Classification. https://usnvc.org/

Waller J (2020) GBIF occurrence license processing. GBIF. https://data-blog.gbif.org/post/gbif-occurrence-license-processing/

Wearn O R & Glover-Kapfer P (2017) Camera-trapping for conservation: a guide to best-practices. ResearchGate. https://doi.org/10.13140/RG.2.2.23409.17767

Wearn O R, Rowcliffe J M, Carbone C, Bernard H & Ewers R M (2013) Assessing the Status of Wild Felids in a Highly-Disturbed Commercial Forest Reserve in Borneo and the Implications for Camera Trap Survey Design. Cameron EZ (Ed). PLoS ONE 8: e77598. https://doi.org/10.1371/journal.pone.0077598

Weinstein B G (2018) A computer vision for animal ecology. Prugh L (Ed). Journal of Animal Ecology 87: 533–545. https://doi.org/10.1111/1365-2656.12780

WildCAM (2018) Wildlife Cameras for Adaptive Management. https://wildcams.ca/

Wilkinson M D, Dumontier M, Aalbersberg I J J, Appleton G, Axton M, Baak A, Blomberg N, Boiten J-W, Silva Santos L B da, Bourne P E, Bouwman J, Brookes A J, Clark T, Crosas M, Dillo I, Dumon O, Edmunds S, Evelo C T, Finkers R, Gonzalez-Beltran A, Gray A J G, Groth P, Goble C, Grethe J S, Heringa J, Hoen P A C ’t, Hooft R, Kuhn T, Kok R, Kok J, Lusher S J, Martone M E, Mons A, Packer A L, Persson B, Rocca-Serra P, Roos M, Schaik R van, Sansone S-A, Schultes E, Sengstag T, Slater T, Strawn G, Swertz M A, Thompson M, Lei J van der, Mulligen E van, Velterop J, Waagmeester A, Wittenburg P, Wolstencroft K, Zhao J & Mons B (2016) The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3. https://doi.org/10.1038/sdata.2016.18

Wilman H, Belmaker J, Simpson J, De La Rosa C, Rivadeneira M M & Jetz W (2014) EltonTraits 1.0: Species-level foraging attributes of the world’s birds and mammals: Ecological Archives E095-178. Ecology 95: 2027–2027. https://doi.org/10.1890/13-1917.1

WoRMS Editorial Board (2023) World Register of Marine Species (WoRMS). https://www.marinespecies.org

Yang H, Li S, Chen J, Zhang X & Xu S (2017) The Standardization and Harmonization of Land Cover Classification Systems towards Harmonized Datasets: A Review. ISPRS International Journal of Geo-Information 6: 154. https://doi.org/10.3390/ijgi6050154

Young S, Rode-Margono J & Amin R (2018) Software to facilitate and streamline camera trap data management: A review. Ecology and Evolution 8: 9947–9957. https://doi.org/10.1002/ece3.4464

Yousif H, Yuan J, Kays R & He Z (2018) Object detection from dynamic scene using joint background modeling and fast deep learning classification. Journal of Visual Communication and Image Representation 55: 802–815. https://doi.org/10.1016/j.jvcir.2018.08.013

Zaragozí B, Belda A, Giménez P, Navarro J T & Bonet A (2015) Advances in camera trap data management tools: Towards collaborative development and integration with GIS. Ecological Informatics 30: 6–11. https://doi.org/10.1016/j.ecoinf.2015.08.001

Zhao M, Heinsch F A, Nemani R R & Running S W (2005) Improvements of the MODIS terrestrial gross and net primary production global data set. Remote Sensing of Environment 95: 164–176. https://doi.org/10.1016/j.rse.2004.12.011