An Intelligent Video Analysis Method for Abnormal Event Detection in Intelligent Transportation Systems
Department of Computer Science and Engineering, Shaoxing University, Shaoxing, China; School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, China; State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China.
School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China.
College of Computer Science, Huaqiao University, Xiamen, China.
Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. ORCID iD: 0000-0003-4228-2774
2021 (English). In: IEEE Transactions on Intelligent Transportation Systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 22, no 7, p. 4487-4495, article id 9190063. Article in journal (Refereed). Published.
Abstract [en]

Intelligent transportation systems pervasively deploy thousands of video cameras. Analyzing the live video streams from these cameras is of significant importance to public safety. As the volume of streaming video grows, it becomes infeasible to have human operators sitting in front of hundreds of screens to catch suspicious activities or detect objects of interest in real time. Indeed, with millions of traffic surveillance cameras installed, video retrieval is more vital than ever. To that end, this article proposes a long video event retrieval algorithm based on superframe segmentation. By detecting the motion amplitude of the long video, a large number of redundant frames can be effectively removed, reducing the number of frames that need to be processed subsequently. Then, a superframe segmentation algorithm based on feature fusion divides the remaining video into several Segments of Interest (SOIs) that contain the video events. Finally, a trained semantic model matches candidate answers against the text question, and the result with the highest matching score is taken as the video segment corresponding to the question. Experimental results demonstrate that the proposed long video event retrieval and description method significantly improves the efficiency and accuracy of semantic description, and significantly reduces retrieval time.
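The first stage of the pipeline described in the abstract, removing redundant frames by motion amplitude, can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation: the mean-absolute-difference measure, the threshold value, and all function names are assumptions.

```python
import numpy as np

def motion_amplitude(prev, curr):
    # Mean absolute per-pixel difference between two grayscale frames,
    # used here as a simple proxy for motion amplitude (an assumption;
    # the paper may use a different motion measure).
    return float(np.mean(np.abs(curr.astype(np.float32) - prev.astype(np.float32))))

def remove_redundant_frames(frames, threshold=2.0):
    # Keep the first frame, then keep only frames whose motion amplitude
    # relative to the last kept frame exceeds the (illustrative) threshold.
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        if motion_amplitude(kept[-1], frame) > threshold:
            kept.append(frame)
    return kept

# Synthetic example: ten identical frames followed by one with a bright block.
static = np.zeros((8, 8), dtype=np.uint8)
moving = static.copy()
moving[2:6, 2:6] = 255
frames = [static] * 10 + [moving]
print(len(remove_redundant_frames(frames)))  # the nine static duplicates are dropped
```

The surviving frames would then be passed to the feature-fusion superframe segmentation stage to form the Segments of Interest.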

Place, publisher, year, edition, pages
IEEE, 2021. Vol. 22, no 7, p. 4487-4495, article id 9190063
Keywords [en]
Intelligent transportation systems, long video event retrieval, question-answering, segment of interest, superframe segmentation
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:umu:diva-186435
DOI: 10.1109/TITS.2020.3017505
ISI: 000673518500053
Scopus ID: 2-s2.0-85110825263
OAI: oai:DiVA.org:umu-186435
DiVA, id: diva2:1582525
Available from: 2021-08-02. Created: 2021-08-02. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Gu, Zonghua
