Weaving a wider spidersilk: Optimizing ads placement using web crawl data

College

College of Science

Document Type

Archival Material/Manuscript

Publication Date

1-2021

Abstract

This research builds a proof of concept for an algorithm that extracts commonalities between webpages through the links contained in the Common Crawl dataset. The resulting similarity information can be surfaced to ads platforms, where the webpages connecting them are analyzed further through association rules generated by the Frequent Itemset Mining process. These rules give insight into the similarity of rollouts across ads platforms, showing how extensive the commonalities are in the connections to different webpages. A keyword-prefiltering step adds a contextual layer that tailors the algorithm to specific industries, enabling a targeting mechanism that makes it more powerful in placing ads where a user wants specific content to gain visibility.
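To make the abstract's pipeline concrete, the following is a minimal, illustrative sketch of Frequent Itemset Mining over webpage link sets and the association rules derived from it. The page names, link URLs, and thresholds are hypothetical stand-ins for Common Crawl data, not the authors' actual implementation; a brute-force enumeration is used in place of an optimized Apriori or FP-Growth miner.

```python
from itertools import combinations

# Toy stand-in for Common Crawl data: each webpage maps to the set of
# outbound links it contains (hypothetical URLs, for illustration only).
pages = {
    "page_a": {"ads.example.com", "shop.example.com", "news.example.com"},
    "page_b": {"ads.example.com", "shop.example.com"},
    "page_c": {"ads.example.com", "news.example.com"},
    "page_d": {"ads.example.com", "shop.example.com", "news.example.com"},
}

def frequent_itemsets(transactions, min_support, max_size=3):
    """Return every link itemset that appears in at least min_support pages."""
    items = sorted(set().union(*transactions))
    frequent = {}
    for size in range(1, max_size + 1):
        for combo in combinations(items, size):
            support = sum(1 for t in transactions if set(combo) <= t)
            if support >= min_support:
                frequent[combo] = support
    return frequent

def association_rules(frequent, min_confidence):
    """Derive rules A -> B with confidence = support(A ∪ B) / support(A)."""
    rules = []
    for itemset, support in frequent.items():
        if len(itemset) < 2:
            continue
        for i in range(1, len(itemset)):
            for antecedent in combinations(itemset, i):
                # Every subset of a frequent itemset is itself frequent,
                # so the antecedent is guaranteed to be in `frequent`.
                conf = support / frequent[antecedent]
                if conf >= min_confidence:
                    consequent = tuple(x for x in itemset if x not in antecedent)
                    rules.append((antecedent, consequent, conf))
    return rules

freq = frequent_itemsets(list(pages.values()), min_support=3)
rules = association_rules(freq, min_confidence=0.75)
```

A keyword-prefiltering layer, as the abstract describes, would simply restrict `pages` to those whose content matches industry-specific keywords before mining, so the resulting rules target a specific vertical.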

Disciplines

Computer Sciences

Keywords

Internet advertising; Web sites—Design
