User:Joce Strad/sandbox
Pages I worked on throughout this project:
Deep web, Darknet, Wildlife trade, Darknet Market
I already made the following edits to the Dark web page:
Many individual journalists, alternative news organizations, and educators or researchers are influential in their writing and speaking about the Darknet, making its use clear to the general public.
Jamie Bartlett is a journalist and tech blogger for The Telegraph and Director of The Centre for the Analysis of Social Media for Demos in conjunction with The University of Sussex. In his book, The Dark Net,[77] Bartlett depicts the world of the Darknet and its implications for human behavior in different contexts. For example, the book opens with the story of a young girl who seeks positive feedback to build her self-esteem by appearing naked online. She is eventually traced on social media sites, where her friends and family are inundated with naked pictures of her. This story highlights the variety of human interactions the Darknet allows for, but also reminds the reader that participation in an overlay network like the Darknet is rarely in complete separation from the larger Web. Bartlett's main objective is exploration of the Darknet and its implications for society. He explores different sub-cultures, some with positive implications for society and some with negative ones.[78]
Bartlett gave a TED Talk in June 2015 further examining the subject.[79] His talk, entitled "How the mysterious Darknet is going mainstream", introduces the idea behind the Darknet to the audience, followed by a walkthrough of one of its websites, the Silk Road. He points out how familiar its webpage design is, resembling the consumer sites used in the larger commercial Web. Bartlett then presents examples of how operating in an uncertain, high-risk market like those in the Darknet actually breeds innovation that he believes can be applied to all markets in the future. As he points out, because vendors are always thinking of new ways to get around restrictions and protect themselves, the Darknet has become more decentralized, more customer-friendly, harder to censor, and more innovative. As our societies increasingly search for ways to retain privacy online, such changes as those occurring in the Darknet are not only innovative but could be beneficial to commercial online websites and markets.
Vice News is Vice Media, Inc.'s current affairs channel, producing daily documentary essays and video through its website and YouTube channel. It prides itself on its coverage of under-reported and off-stream stories. Vice News was created in December 2013 and is based in New York City, though it has bureaus worldwide. Since its creation, Vice News has covered emerging events and widespread issues around the world in a way that traditional media channels cannot or do not. As an "alternative" news station, it is able to cover many controversial, unedited topics. One of these is the Darknet. Vice now has many stories examining different aspects of the Darknet and its underground markets, including examinations of illegal animal hunting, child pornography, and drugs on the Darknet.
Traditional media and news channels like ABC News have also featured articles examining the Darknet.[80] Vanity Fair published an article in October 2016 entitled "The Other Internet". The article discusses the rise of the Dark Net and notes that the stakes have become high in a lawless digital wilderness, where a vulnerability is a weakness in a network's defenses. Other topics include the e-commerce versions of conventional black markets, cyberweaponry from TheRealDeal, and the role of operations security.[81]
- Made many of the changes suggested in my peer reviews: increasing links, beginning to add content to the second article, etc.
- Made many hyperlinks on the Dark Web page to the appropriate Wiki pages. (Dark web)
- Added a line, with reference to two sources, explaining the difference between the Darknet and the Deep Web: "The Darknet is often confused with the Deep Web (or Deep Net). While the Deep Web refers to any site that cannot be accessed through a traditional search engine, the Darknet is a small and classified portion of the Deep Web that has been intentionally hidden and is inaccessible through standard browsers.[1][2]"
- Added this clarification paragraph to both pages.
My peer reviewed sources are as follows:
1) Brooke, Z. "A Marketer's Guide to the Dark Web." Marketing Insights. Spring 2016;28(1):23-27. Available from: Business Source Complete, Ipswich, MA. Accessed October 25, 2016.
- Edit Made:
- Social Media
- There exist within the Dark Web emerging social media platforms similar to those on the World Wide Web. Facebook and other traditional social media platforms have begun to make Dark Web versions of their websites to address problems associated with the traditional platforms and to continue their service in all areas of the World Wide Web.[3]
2) Harrison, J.R.; Roberts, D.L.; Hernandez-Castro, J. "Assessing the extent and nature of wildlife trade on the dark web." Conservation Biology. 30(4):900-904, Aug. 2016. ISSN: 0888-8892.
Edit Made:
Illegal Animal Trade
Within the markets existing on the Dark web, illegal animal trade has increasingly appeared after scrutiny and regulation increased on the World Wide Web. However, the amount of activity is still negligible compared to the amount on the open web. As stated in an examination of search engine keywords relating to wildlife trade, "This negligible level of activity related to the illegal trade of wildlife on the dark web relative to the open and increasing trade on the surface web may indicate a lack of successful enforcement against illegal wildlife trade on the surface web."[4]
Edit Made to Wildlife trade page:
Online Illegal Trade
Through both deep web (password protected, encrypted) and dark web (special portal browsers) markets, participants can trade and transact illegal substances, including wildlife. However, the amount of activity is still negligible compared to the amount on the open or surface web. As stated in an examination of search engine keywords relating to wildlife trade in an article published in Conservation Biology, "This negligible level of activity related to the illegal trade of wildlife on the dark web relative to the open and increasing trade on the surface web may indicate a lack of successful enforcement against illegal wildlife trade on the surface web."[5]
3) Moore, D.; Rid, T. "Cryptopolitik and the Darknet." Survival. 58(1):7-38, Feb. 2016. ISSN: 0039-6338.
Edit Made:
One of the important features of bitcoin and similar services on the Dark web is their encryption policy, which is becoming a test of the values of liberal democracy in the twenty-first century.[6]
4) Bancroft, A.; Scott Reid, P. "Concepts of illicit drug quality among darknet market users: Purity, embodied experience, craft and chemical knowledge." International Journal of Drug Policy. 35 (Drug Cryptomarkets):42-49, Sept. 1, 2016. ISSN: 0955-3959.
- Edit Made:
- Added academic citation to "which mediate transactions for illegal drugs[7]"
- 5) Omand, D. "The Dark Net: Policing the Internet's Underworld." World Policy Journal. 32(4):75, Dec. 2015. ISSN: 0740-2775.
- Edit Made:
- Cyber crimes and hacking services for financial institutions and banks have also been offered over the Dark web.[8]
- 6) Nishikaze, H.; et al. "Large-Scale Monitoring for Cyber Attacks by Using Cluster Information on Darknet Traffic Features." Procedia Computer Science. 53 (INNS Conference on Big Data 2015, San Francisco, CA, USA, 8-10 August 2015):175-182, Jan. 1, 2015. ISSN: 1877-0509.
- Edit Made:
- Attempts to monitor this activity have been made through various government and private organizations, and an examination of the tools used can be found in the Procedia Computer Science journal.[9]
- 7) Rhumorbarbe, D.; et al. "Buying drugs on a Darknet market: A better deal? Studying the online illicit drug market through the analysis of digital, physical and chemical data." Forensic Science International. 267:173-182, Oct. 1, 2016. ISSN: 0379-0738.
- Edit made:
- Examinations of price differences in Dark web markets versus prices in real life or over the World Wide Web have been attempted, as well as studies of the quality of goods received over the Dark web. One such study examined the quality of illegal drugs found on Evolution, one of the most popular cryptomarkets, active from January 2014 to March 2015.[10] Among its analytical findings was that digital information, such as concealment methods and shipping country, seems accurate, "but the illicit drugs purity is found to be different from the information indicated on their respective listings."[11]
- 8) Goldsborough, R. "Stay Clear of the Darknet." Tech Directions. 75(7):12, Mar. 2016. ISSN: 1062-9351.
- Edit Made:
- Added reference and citation to Diabolus Market.
- 9) Van Buskirk, J.; et al. "Characterising dark net marketplace purchasers in a sample of regular psychostimulant users." International Journal of Drug Policy. 35:32-37, Sept. 2016. ISSN: 0955-3959.
- Edit Made:
- Less is known about consumer motivations for accessing these marketplaces and factors associated with their use.[12]
- 10) Fachkha, C.; Bou-Harb, E.; Debbabi, M. "Inferring distributed reflection denial of service attacks from darknet." Computer Communications. 62:59-71, May 15, 2015. ISSN: 0140-3664.
- Edit Made:
- Use of Internet-scale DNS Distributed Reflection Denial of Service (DRDoS) attacks has also been made through leveraging the Dark Web.[45]
- 11) Bergman, Michael K. "White Paper: The Deep Web: Surfacing Hidden Value." Journal of Electronic Publishing 7.1 (2001).
- The deep Web is qualitatively different from the surface Web. Deep Web sources store their content in searchable databases that only produce results dynamically in response to a direct request. But a direct query is a "one at a time" laborious way to search. BrightPlanet's search technology automates the process of making dozens of direct queries simultaneously using multiple-thread technology and thus is the only search technology, so far, that is capable of identifying, retrieving, qualifying, classifying, and organizing both "deep" and "surface" content.
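To make the "direct query" idea concrete, here is a minimal Python sketch of issuing the same query to several searchable databases at once, one thread per source, in the spirit of the multiple-thread technology the abstract describes. The endpoint URLs and the "q" parameter name are hypothetical placeholders, not BrightPlanet's actual system.

```python
# Sketch: fan one query out to several deep-web sources simultaneously.
# All endpoints below are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
import urllib.parse
import urllib.request

SOURCES = [  # hypothetical searchable-database endpoints
    "https://example-library.org/search",
    "https://example-archive.org/query",
    "https://example-journal.org/find",
]

def direct_query(base_url: str, query: str, timeout: float = 10.0) -> str:
    """Send one direct query to a source and return the raw result page."""
    url = base_url + "?" + urllib.parse.urlencode({"q": query})
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

def query_all(query: str) -> dict:
    """Query every source simultaneously, one thread per source."""
    with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
        futures = {src: pool.submit(direct_query, src, query) for src in SOURCES}
    results = {}
    for src, fut in futures.items():
        try:
            results[src] = fut.result()
        except Exception as exc:  # unreachable host, timeout, HTTP error
            results[src] = f"error: {exc}"
    return results

if __name__ == "__main__":
    for src, page in query_all("wildlife trade").items():
        print(src, len(page))
```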
- 12) He, Bin, et al. "Accessing the deep web." Communications of the ACM 50.5 (2007): 94-101.
- Attempting to locate and quantify material on the Web that is hidden from typical search techniques.
- 13) Madhavan, Jayant, et al. "Google's deep web crawl." Proceedings of the VLDB Endowment 1.2 (2008): 1241-1252.
- Surfacing the Deep Web poses several challenges. First, our goal is to index the content behind many millions of HTML forms that span many languages and hundreds of domains. This necessitates an approach that is completely automatic, highly scalable, and very efficient. Second, a large number of forms have text inputs and require valid inputs values to be submitted. We present an algorithm for selecting input values for text search inputs that accept keywords and an algorithm for identifying inputs which accept only values of a specific type. Third, HTML forms often have more than one input and hence a naive strategy of enumerating the entire Cartesian product of all possible inputs can result in a very large number of URLs being generated. We present an algorithm that efficiently navigates the search space of possible input combinations to identify only those that generate URLs suitable for inclusion into our web search index. We present an extensive experimental evaluation validating the effectiveness of our algorithms.
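As a rough illustration of why naive enumeration fails, the sketch below counts the full Cartesian product of a hypothetical HTML form's inputs and then generates only a small, bounded subset of candidate URLs. The form URL, input names, and candidate values are all invented, and a simple per-input cap stands in for the paper's actual input-selection algorithms.

```python
# Sketch: bound the number of form-submission URLs instead of
# enumerating the full Cartesian product of all input values.
import itertools
import urllib.parse

FORM_URL = "https://example-realestate.org/search"  # hypothetical form

INPUTS = {  # hypothetical candidate values per form input
    "city": ["austin", "boston", "chicago", "denver", "el paso"],
    "beds": ["1", "2", "3", "4"],
    "price_max": ["100000", "250000", "500000"],
}

def generate_urls(inputs: dict, per_input_cap: int = 3, total_cap: int = 20) -> list:
    """Enumerate a bounded subset of input combinations as GET URLs."""
    # Keep only the first few values of each input instead of all of them.
    trimmed = {name: vals[:per_input_cap] for name, vals in inputs.items()}
    names = list(trimmed)
    urls = []
    for combo in itertools.product(*(trimmed[n] for n in names)):
        if len(urls) >= total_cap:  # hard budget on surfaced URLs
            break
        urls.append(FORM_URL + "?" + urllib.parse.urlencode(dict(zip(names, combo))))
    return urls

full_product = 1
for vals in INPUTS.values():
    full_product *= len(vals)
print(f"full Cartesian product: {full_product} URLs")    # 5 * 4 * 3 = 60
print(f"surfaced subset: {len(generate_urls(INPUTS))}")  # capped at 20
```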
- 14) Liu, Wei, Xiaofeng Meng, and Weiyi Meng. "ViDE: A vision-based approach for deep web data extraction." IEEE Transactions on Knowledge and Data Engineering 22.3 (2010): 447-460.
- Deep Web contents are accessed by queries submitted to Web databases and the returned data records are enwrapped in dynamically generated Web pages (they will be called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they are Web-page-programming-language-dependent. As the popular two-dimensional media, the contents on Web pages are always displayed regularly for users to browse. This motivates us to seek a different way for deep Web data extraction to overcome the limitations of previous works by utilizing some interesting common visual features on the deep Web pages. In this paper, a novel vision-based approach that is Web-page-programming-language-independent is proposed. This approach primarily utilizes the visual features on the deep Web pages to implement deep Web data extraction, including data record extraction and data item extraction. We also propose a new evaluation measure revision to capture the amount of human effort needed to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based approach is highly effective for deep Web data extraction.
- 15) Wu, Wensheng, et al. "An interactive clustering-based approach to integrating source query interfaces on the deep web." Proceedings of the 2004 ACM SIGMOD international conference on Management of data. ACM, 2004.
- An increasing number of data sources now become available on the Web, but often their contents are only accessible through query interfaces. For a domain of interest, there often exist many such sources with varied coverage or querying capabilities. As an important step to the integration of these sources, we consider the integration of their query interfaces. More specifically, we focus on the crucial step of the integration: accurately matching the interfaces. While the integration of query interfaces has received more attentions recently, current approaches are not sufficiently general: (a) they all model interfaces with flat schemas; (b) most of them only consider 1:1 mappings of fields over the interfaces; (c) they all perform the integration in a blackbox-like fashion and the whole process has to be restarted from scratch if anything goes wrong; and (d) they often require laborious parameter tuning. In this paper, we propose an interactive, clustering-based approach to matching query interfaces. The hierarchical nature of interfaces is captured with ordered trees. Varied types of complex mappings of fields are examined and several approaches are proposed to effectively identify these mappings. We put the human integrator back in the loop and propose several novel approaches to the interactive learning of parameters and the resolution of uncertain mappings. Extensive experiments are conducted and results show that our approach is highly effective.
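The clustering intuition can be sketched very simply: field labels gathered from different query interfaces are grouped together when they look alike. The sketch below uses plain string similarity and greedy single-link grouping over invented labels; the paper's actual approach (ordered trees, complex mappings, interactive learning of parameters) is far richer.

```python
# Sketch: group similar-looking interface field labels into clusters.
# Labels are invented; SequenceMatcher stands in for real matching logic.
from difflib import SequenceMatcher

FIELDS = ["Title", "Book Title", "Author", "Author Name",
          "ISBN", "ISBN Number", "Publisher"]

def similar(a: str, b: str) -> float:
    """Similarity of two labels in [0, 1], ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster(labels, threshold: float = 0.5):
    """Greedy single-link clustering: a label joins the first cluster
    containing any sufficiently similar member, else starts its own."""
    clusters = []
    for label in labels:
        for group in clusters:
            if any(similar(label, member) >= threshold for member in group):
                group.append(label)
                break
        else:
            clusters.append([label])
    return clusters

for group in cluster(FIELDS):
    print(group)
# [['Title', 'Book Title'], ['Author', 'Author Name'],
#  ['ISBN', 'ISBN Number'], ['Publisher']]
```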
- 16) Madhavan, Jayant, et al. "Harnessing the deep web: Present and future." arXiv preprint arXiv:0909.1785 (2009).
- Over the past few years, we have built a system that has exposed large volumes of Deep-Web content to Google.com users. The content that our system exposes contributes to more than 1000 search queries per-second and spans over 50 languages and hundreds of domains. The Deep Web has long been acknowledged to be a major source of structured data on the web, and hence accessing Deep-Web content has long been a problem of interest in the data management community. In this paper, we report on where we believe the Deep Web provides value and where it does not. We contrast two very different approaches to exposing Deep-Web content -- the surfacing approach that we used, and the virtual integration approach that has often been pursued in the data management literature. We emphasize where the values of each of the two approaches lie and caution against potential pitfalls. We outline important areas of future research and, in particular, emphasize the value that can be derived from analyzing large collections of potentially disparate structured data on the web.
- 17) Wu, Wensheng, AnHai Doan, and Clement Yu. "WebIQ: Learning from the web to match deep-web query interfaces." 22nd International Conference on Data Engineering (ICDE'06). IEEE, 2006.
- Integrating Deep Web sources requires highly accurate semantic matches between the attributes of the source query interfaces. These matches are usually established by comparing the similarities of the attributes' labels and instances. However, attributes on query interfaces often have no or very few data instances. The pervasive lack of instances seriously reduces the accuracy of current matching techniques. To address this problem, we describe WebIQ, a solution that learns from both the Surface Web and the Deep Web to automatically discover instances for interface attributes. WebIQ extends question answering techniques commonly used in the AI community for this purpose. We describe how to incorporate WebIQ into current interface matching systems. Extensive experiments over five real-world domains show the utility of WebIQ. In particular, the results show that acquired instances help improve matching accuracy from 89.5% F-1 to 97.5%, at only a modest runtime overhead.
- 18) Lu, Yiyao, et al. "Annotating structured data of the deep Web." 2007 IEEE 23rd International Conference on Data Engineering. IEEE, 2007.
- An increasing number of databases have become Web accessible through HTML form-based search interfaces. The data units returned from the underlying database are usually encoded into the result pages dynamically for human browsing. For the encoded data units to be machine processable, which is essential for many applications such as deep Web data collection and comparison shopping, they need to be extracted out and assigned meaningful labels. In this paper, we present a multi-annotator approach that first aligns the data units into different groups such that the data in the same group have the same semantics. Then for each group, we annotate it from different aspects and aggregate the different annotations to predict a final annotation label. An annotation wrapper for the search site is automatically constructed and can be used to annotate new result pages from the same site. Our experiments indicate that the proposed approach is highly effective.
- 19) Li, Xian, et al. "Truth finding on the deep web: Is the problem solved?" Proceedings of the VLDB Endowment. Vol. 6. No. 2. VLDB Endowment, 2012.
- The amount of useful information available on the Web has been growing at a dramatic pace in recent years and people rely more and more on the Web to fulfill their information needs. In this paper, we study truthfulness of Deep Web data in two domains where we believed data are fairly clean and data quality is important to people's lives: Stock and Flight. To our surprise, we observed a large amount of inconsistency on data from different sources and also some sources with quite low accuracy. We further applied on these two data sets state-of-the-art data fusion methods that aim at resolving conflicts and finding the truth, analyzed their strengths and limitations, and suggested promising research directions. We wish our study can increase awareness of the seriousness of conflicting data on the Web and in turn inspire more research in our community to tackle this problem.
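A minimal baseline for the conflict resolution this abstract studies is a majority vote over the values that different sources claim for the same data item. The flight-arrival claims below are invented for illustration; the state-of-the-art fusion methods the paper evaluates additionally model source accuracy and copying between sources.

```python
# Sketch: resolve conflicting source claims by majority vote.
# All sources and values below are invented for illustration.
from collections import Counter

CLAIMS = [  # (source, data item, claimed value)
    ("source_a", "UA101.arrival", "18:05"),
    ("source_b", "UA101.arrival", "18:05"),
    ("source_c", "UA101.arrival", "17:50"),
    ("source_d", "UA101.arrival", "18:05"),
]

def fuse(claims):
    """Group claims by data item and keep the most frequent value."""
    by_item = {}
    for _source, item, value in claims:
        by_item.setdefault(item, []).append(value)
    return {item: Counter(values).most_common(1)[0][0]
            for item, values in by_item.items()}

print(fuse(CLAIMS))  # {'UA101.arrival': '18:05'}
```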
- 20) Seelos, Christian, and Johanna Mair. "Profitable business models and market creation in the context of deep poverty: A strategic view." The Academy of Management Perspectives 21.4 (2007): 49-63.
- The bottom of the pyramid (BOP) in the global distribution of income has been promoted as a significant opportunity for companies to grow profitably. Under the BOP approach, poor people are identified as potential customers who can be served if companies learn to fundamentally rethink their existing strategies and business models. This involves acquiring and building new resources and capabilities and forging a multitude of local partnerships. However, current BOP literature remains relatively silent about how to actually implement such a step into the unknown. We use two BOP cases to illustrate a strategic framework that reduces managerial complexity. In our view, existing capabilities and existing local BOP models can be leveraged to build new markets that include the poor and generate sufficient financial returns for companies to justify investments.
- ^ "Deep Web Search and Dark Web Search – Similar Names; Major Differences!". Brightplanet. October 12th, 2012.
{{cite web}}
: Check date values in:|date=
(help) - ^ "Clearing Up Confusion – Deep Web vs. Dark Web". Brightplanet. March 27, 2014.
- ^ "A MARKETER'S GUIDE TO THE DARK WEB". Marketing Insights.
- ^ "Assessing the extent and nature of wildlife trade on the dark web". Conservation Biology. 30.
- ^ "Assessing the extent and nature of wildlife trade on the dark web". Conservation Biology. 30.
- ^ "Cryptopolitik and the Darknet". Survival. 58.
- ^ "Concepts of illicit drug quality among darknet market users: Purity, embodied experience, craft and chemical knowledge". International Journal of Drug Policy. 35.
- ^ "The Dark Net: Policing the Internet's Underworld". World Policy Journal. 32.
- ^ "Large-Scale Monitoring for Cyber Attacks by Using Cluster Information on Darknet Traffic Features". Procedia Computer Science. 53.
- ^ "Buying drugs on a Darknet market: A better deal? Studying the online illicit drug market through the analysis of digital, physical and chemical data". Forensic Science International. 267.
- ^ "Buying drugs on a Darknet market: A better deal? Studying the online illicit drug market through the analysis of digital, physical and chemical data". Forensic Science International. 267.
- ^ "Characterising dark net marketplace purchasers in a sample of regular psychostimulant users". International Journal of Drug Policy. 35.