<?xml version="1.0" encoding="UTF-8" ?>
<modsCollection xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.loc.gov/mods/v3" xmlns:slims="http://slims.web.id" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-3.xsd">
<mods version="3.3" id="59312">
 <titleInfo>
  <title>BESKlus</title>
  <subTitle>BERT Extractive Summarization with K-Means Clustering in Scientific Paper</subTitle>
 </titleInfo>
 <name type="Personal Name" authority="">
  <namePart>Samosir Feliks Victor Parningotan</namePart>
  <role>
   <roleTerm type="text">Primary Author</roleTerm>
  </role>
 </name>
 <typeOfResource>text</typeOfResource>
 <genre authority="marcgt">article</genre>
 <originInfo>
  <place>
   <placeTerm type="text">Bandung</placeTerm>
  </place>
  <publisher>Maranatha University Press</publisher>
  <dateIssued>2022</dateIssued>
 </originInfo>
 <language>
  <languageTerm type="code"></languageTerm>
  <languageTerm type="text">Indonesia</languageTerm>
 </language>
 <physicalDescription>
  <form authority="gmd">Artikel Jurnal</form>
  <extent>hlm : 202-217</extent>
 </physicalDescription>
 <relatedItem type="series">
  <titleInfo/>
  <title>JUTISI : Jurnal Teknik Informatika dan Sistem Informasi</title>
 </relatedItem>
<note>
Abstract
This study proposes a method and model for extractive text summarization with contextual embeddings. The model combines a traditional machine learning algorithm, K-Means Clustering, with a recent BERT-based architecture, Sentence-BERT (SBERT). SBERT produces contextual embeddings at the sentence level; the embedded sentences are then clustered, and the distance of each sentence from its cluster centroid is computed. The sentences closest to each centroid are taken as summary candidates. The dataset used in this study is a collection of scientific papers from NeurIPS. Evaluation with ROUGE-L gives a score of 15.52% and a BERTScore of 85.55%, surpassing earlier models such as PyTextRank and BERT Extractive Summarizer. These results show that contextual embeddings work very well for extractive text summarization, which is generally performed at the sentence level.
</note>
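<!--
A minimal sketch of the pipeline the abstract describes, assuming the
sentence-transformers and scikit-learn libraries; the model name, cluster
count, and Euclidean centroid distance are illustrative assumptions, not
the paper's exact configuration.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

def besklus_summary(sentences, n_clusters=3):
    # Embed each sentence with SBERT (model choice is an assumption).
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(sentences)
    # Cluster the sentence embeddings with K-Means.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    picked = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # Select the sentence closest to the centroid as a summary candidate.
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        picked.append(members[np.argmin(dists)])
    # Return the candidates in their original document order.
    return [sentences[i] for i in sorted(picked)]
-->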
<note type="statement of responsibility"></note>
<subject authority="">
 <topic>Informatika</topic>
</subject>
<subject authority="">
 <topic>Sistem Informasi</topic>
</subject>
<classification>JUTISI</classification>
<identifier type="isbn">24432210</identifier>
<location>
 <physicalLocation>Perpustakaan Teknik UPI YAI</physicalLocation>
 <shelfLocator>JUTISI V8N1 April 2022</shelfLocator>
 <holdingSimple>
  <copyInformation>
   <numerationAndChronology type="1">JUTISI7a-017</numerationAndChronology>
   <sublocation>Perpustakaan FT UPI YAI</sublocation>
   <shelfLocator>JUTISI V8N1 April 2022</shelfLocator>
  </copyInformation>
  <copyInformation>
   <numerationAndChronology type="1">JUTISI7b-017</numerationAndChronology>
   <sublocation>Perpustakaan FT UPI YAI</sublocation>
   <shelfLocator>JUTISI V8N1 April 2022</shelfLocator>
  </copyInformation>
 </holdingSimple>
</location>
<slims:image>cover_issue_174_en_US.png</slims:image>
<recordInfo>
 <recordIdentifier>59312</recordIdentifier>
 <recordCreationDate encoding="w3cdtf">2023-03-01T10:18:05</recordCreationDate>
 <recordChangeDate encoding="w3cdtf">2023-03-01T10:18:05</recordChangeDate>
 <recordOrigin>machine generated</recordOrigin>
</recordInfo>
</mods>
</modsCollection>