This item is licensed under the Korea Open Government License.
dc.contributor.author
Cheol Shim
dc.contributor.author
Min Choi
dc.contributor.author
Kwang-ho Cha
dc.date.accessioned
2022-01-11T08:49:25Z
dc.date.available
2022-01-11T08:49:25Z
dc.date.issued
2019-01-31
dc.identifier.issn
1386-7857
dc.identifier.uri
https://repository.kisti.re.kr/handle/10580/16225
dc.description.abstract
Cloud computing services are underpinned by data centers in which homogeneous and heterogeneous computation nodes are connected by a high-speed interconnection network. The rapid growth of cloud-based services and applications has placed increasing demands on data center networks. PCI Express is a widely used system bus technology that connects processors and peripheral I/O devices, and it is regarded as a de facto standard for system area interconnection networks. The possibility of using PCI Express as a system interconnection network is currently being validated in areas such as high-performance computing and cluster/cloud computing. With the development of PCI Express non-transparent bridge (NTB) technology, PCI Express has become usable as a system interconnection network: an NTB allows two PCI Express subsystems to be interconnected and, when necessary, isolated from each other. Partitioned global address space (PGAS) is a shared address space programming model that, with the recent spread of multicore processors, has been attracting attention as a parallel computing framework. We make use of the PCI Express NTB to realize the PGAS shared address space model. In this paper, we design and implement an interconnection network over PCI Express x8 using a Rapid Development Kit (RDK), a PEX8749-based PCI Express evaluation board. We ran several OpenSHMEM applications from GitHub to verify the correctness of our initial OpenSHMEM API implementation.
dc.language.iso
eng
dc.publisher
SPRINGER
dc.relation.ispartofseries
CLUSTER COMPUTING
dc.title
Design and implementation of initial OpenSHMEM on PCIe NTB based cloud computing