<?xml version="1.0" encoding="UTF-8" ?>
<modsCollection xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.loc.gov/mods/v3" xmlns:slims="http://slims.web.id" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-3.xsd">
<mods version="3.3" id="58987">
 <titleInfo>
  <title>Edge Computing Implementation for Action Recognition Systems</title>
 </titleInfo>
 <name type="personal" authority="">
  <namePart>Pratama Afis Asryullah</namePart>
  <role>
   <roleTerm type="text">Primary Author</roleTerm>
  </role>
 </name>
 <typeOfResource manuscript="no" collection="no">text</typeOfResource>
 <genre authority="marcgt">article</genre>
 <originInfo>
  <place>
   <placeTerm type="text">Semarang</placeTerm>
  </place>
  <publisher>Universitas Negeri Semarang</publisher>
  <dateIssued>2020</dateIssued>
 </originInfo>
 <language>
  <languageTerm type="code" authority="iso639-2b">eng</languageTerm>
  <languageTerm type="text">English</languageTerm>
 </language>
 <physicalDescription>
  <form authority="gmd">Journal Article</form>
  <extent>pp. 303-315</extent>
 </physicalDescription>
 <relatedItem type="series">
  <titleInfo>
   <title>Scientific Journal of Informatics</title>
  </titleInfo>
 </relatedItem>
</mods>
<note>
Abstract
Deep learning has recently been applied to many different sectors, including human action recognition systems. Such systems usually require substantial computing resources. When implemented on a cloud computing architecture, the sensors must send all raw data to the cloud, which places a heavy load on the network. Edge computing exists to overcome that weakness. This paper presents a method to recognize human actions using deep learning on an edge computing architecture. Taking RGB images as input, the system first detects all persons in the frame using an SSD-MobileNet V2 model with various threshold values, then recognizes each person's action using our trained model with the DetectNet architecture, also at various thresholds. The output of the system is each detected person's RoI and its recognized action, which is much smaller than the whole frame. As a result, our proposed system yields a best human detection accuracy of 64.06% at a threshold of 0.15 and a best action recognition accuracy of 37.8% at a threshold of 0.4.
</note>
<note type="statement of responsibility"></note>
<subject authority="">
 <topic>Informatika</topic>
</subject>
<classification>SJI</classification>
<identifier type="issn">2407-7658</identifier>
<location>
 <physicalLocation>Perpustakaan Teknik UPI YAI</physicalLocation>
 <shelfLocator>SJI V7N2 November 2020</shelfLocator>
 <holdingSimple>
  <copyInformation>
   <numerationAndChronology type="1">SJI2-014</numerationAndChronology>
   <sublocation>Perpustakaan FT UPI YAI</sublocation>
   <shelfLocator>SJI V7N2 November 2020</shelfLocator>
  </copyInformation>
 </holdingSimple>
</location>
<slims:image>SJI_V7N2_November_2020.jpg.jpg</slims:image>
<recordInfo>
 <recordIdentifier>58987</recordIdentifier>
 <recordCreationDate encoding="w3cdtf">2023-02-08T13:34:22</recordCreationDate>
 <recordChangeDate encoding="w3cdtf">2023-02-08T13:34:22</recordChangeDate>
 <recordOrigin>machine generated</recordOrigin>
</recordInfo>
</modsCollection>