dc.contributor.author | Cano, Alberto | |
dc.contributor.author | Zafra Gómez, Amelia | |
dc.contributor.author | Ventura Soto, S. | |
dc.date.accessioned | 2017-01-19T12:51:04Z | |
dc.date.available | 2017-01-19T12:51:04Z | |
dc.date.issued | 2017 | |
dc.identifier.uri | http://hdl.handle.net/10396/14344 | |
dc.description.abstract | Multiple instance learning is a challenging task in supervised learning and data mining. However, algorithm performance becomes slow when learning from large-scale and high-dimensional data sets. Graphics processing units (GPUs) are being used to reduce the computing time of algorithms. This paper presents an implementation of the G3P-MI algorithm on GPUs for solving multiple instance problems using classification rules. The proposed GPU model is distributable across multiple GPUs, seeking scalability on large-scale and high-dimensional data sets. The proposal is compared with the multi-threaded CPU algorithm with SSE parallelism over a series of data sets. Experimental results show that computation time can be significantly reduced and scalability improved. Specifically, a speedup of up to 149× over the multi-threaded CPU algorithm is achieved when using four GPUs, and the rules interpreter achieves high efficiency, running over 108 billion Genetic Programming operations per second. | es_ES |
dc.format.mimetype | application/pdf | es_ES |
dc.language.iso | eng | es_ES |
dc.rights | https://creativecommons.org/licenses/by-nc-nd/4.0/ | es_ES |
dc.subject | Multi-instance learning | es_ES |
dc.subject | Classification | es_ES |
dc.subject | Parallel computing | es_ES |
dc.subject | GPU | es_ES |
dc.title | Speeding up Multiple Instance Learning Classification Rules on GPUs | es_ES |
dc.type | info:eu-repo/semantics/preprint | es_ES |
dc.relation.projectID | Gobierno de España. TIN-2011-22408 | es_ES |
dc.relation.projectID | Gobierno de España. FPU-AP2010-0042 | es_ES |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | es_ES |