Key hardware for the World Wide Web (or just the Web) includes, but is not limited to: routers, switches, Web servers, gateways, fiber nodes, Ethernet cable, fiber cable, satellites, and radio. All of these are in some way involved in transmitting and receiving internet traffic, the medium on which the Web operates. On the software side, browsers, server software, and the various web scripting languages allow Websites to be built and operated, and thereby allow interaction with the media of the Web. In essence, Websites and Web media are the entire reason for the Web.
A primary technology which is already changing how people interact with the Web is Google Glass (Google, 2013). Google Glass is a device which users can wear on their face, giving them access to the Web through a graphical heads-up overlay. This technology will enable augmented reality to finally come into its own, thereby melding Web technology with real-world interactions.
Web technology which is nearly there, but still in its infancy, is ubiquitous presence: a single sign-on which melds home use with work use and on-the-go use. Google and Facebook are leading the charge in this area, enabling users to access their application suites from anywhere using their account logins (Facebook, 2013). However, the problem with these technologies lies in the multiple logins still required. A web service that allowed users to combine all their myriad logins into a single login would go far in enabling a greater depth of HCI on the web.
JNS Host is a web hosting service based in Queensland, Australia (JNS Host, 2013). It offers three service tiers, all aimed at private users and not viable for enterprise. That being said, for a first-time free service, it offers quite a bit. The three features that make it stand out most are unlimited data transfer/bandwidth, PHP: Hypertext Preprocessor (PHP) support, and 20 MySQL database instances.
Data transfer limitation can be a huge nuisance for sites that become popular, or that offer high-definition media which consumes a large amount of bandwidth. Examples of this are high-definition images such as page graphics, personal photos, or large documents (such as those found in an e-Portfolio). In general, hosting sites that offer free space do not offer much in the way of data transfer/bandwidth. As such, JNS Host places itself in front of other free service providers with this amenity.
PHP is a scripting language, similar to C, Java, and Perl, that is embedded directly into the HTML of a page (The PHP Group, 2013). It enables Web developers to add programmatic logic to a site; the script is executed on the Web server, and only the resulting HTML is rendered in the client's browser. In essence, it allows dynamic interaction with site users. Moreover, PHP can interface with MySQL to enable database storage and retrieval.
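The server-side pattern described above can be sketched in JavaScript (Node.js-style) rather than PHP itself; the function name and page content here are hypothetical, but the principle is the same: the logic runs on the server and only finished HTML reaches the browser.

```javascript
// Minimal sketch of server-side dynamic rendering, the pattern PHP
// implements. Only the returned HTML string would be sent to the client.
function renderGreeting(userName) {
  // Escape user input before embedding it in markup (simplified escaping).
  const safe = String(userName).replace(/[&<>"']/g, (ch) =>
    ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[ch]));
  return `<html><body><h1>Welcome, ${safe}!</h1></body></html>`;
}

// The browser receives plain HTML; the rendering logic never leaves the server.
const page = renderGreeting('Ada');
```

Escaping user input before embedding it, as above, is the same precaution a PHP developer takes with functions like htmlspecialchars.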
JNS Host offers 20 MySQL database instances; MySQL is an open source relational database management system, wholly owned by Oracle and widely used across the Web (Oracle Corporation, 2013). Combined with the unlimited bandwidth and PHP scripting capabilities above, built-in MySQL support makes JNS Host highly desirable to any budding Web developer. In essence, MySQL allows massive data management capabilities for free. Leveraging this utility allows for site intelligence gathering such as user statistics, user interactions, and/or dynamic storage and retrieval of site content via the RDBMS.
The primary drawback to all of this "freeness" is in the terms and conditions, however. Site usage must remain constant and not go stagnant for longer than thirty days, or the site will be deleted by JNS administrators. This means that any hosted site must be backed up to a local location or be subject to catastrophic failure in the future. Furthermore, storage space on the host is limited to 3GB, which by today's standards is minuscule. Nevertheless, if a developer is building a throwaway sandbox site, JNS Host offers more than could really be asked of a free service.
The PHP Group. (2013). General Information. Retrieved September 3, 2013, from PHP: http://php.net
Raster versus Vector Graphics
Rasterization and vectoring are two ways of creating images for display on the web. Each has pros and cons in how it displays images and in its overall implementation; neither can really be said to be better than the other. However, each has its own unique purpose, reason, and time to be used.
A raster graphic is one made up of many pixels (single square dots) arranged in a grid called a bitmap, much like the classical painting style of Pointillism (ArtCyclopedia, NA). In essence, primary colors are grouped together as dots to create secondary colors which, when seen from afar, produce an overall image. Since pixels are incredibly small, "afar" need only be a short distance away. Combining rasterized imaging with graphics compression, such as JPEG, can lead to highly detailed images with relatively small file sizes (MSDNArchive, 2006).
Vector graphics are those images "where each line, curve, shape, and colour is mathematically defined", thereby allowing the image to be resized infinitely without losing image clarity (Encyclopædia Britannica, Inc, 2013). This means that the image in question is drawn using mathematical coordinates to assign both curves and color as a function. Because math is the key ingredient, the file size of each vector image can be especially small, since the only data stored is the formulae that describe the graphic in question.
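The storage difference can be made concrete with some rough arithmetic. The figures below are illustrative back-of-the-envelope numbers (an uncompressed 24-bit bitmap versus a vector shape stored as a few numbers), not measurements of any real file format.

```javascript
// Rough storage comparison: raster (pixel grid) versus vector (formula).

// An uncompressed bitmap stores 3 bytes (red, green, blue) per pixel.
function rasterBytes(width, height) {
  return width * height * 3;
}

// A vector circle needs only centre x, centre y, radius, and an RGB
// colour: six numbers, assumed stored as 8-byte floats.
function vectorCircleBytes() {
  return 6 * 8;
}

// A 1000x1000 bitmap of a plain circle versus its vector description:
const raster = rasterBytes(1000, 1000); // 3,000,000 bytes, fixed resolution
const vector = vectorCircleBytes();     // 48 bytes, sharp at any zoom level
```

The vector version stays 48 bytes no matter how large the circle is drawn, which is exactly the infinite-resize property described above.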
Rasterized graphics are primarily used for those images which contain high definition detail, for instance, photographs. These types of images are not suited for vector based imaging as the number of curves and color palettes required to render the image as a vector would be prohibitively expensive in terms of describing it mathematically. Additionally, the amount of detail for a photograph may not need to be as great as a vector image would require. Image compression on a rasterized image still leaves a graphic that is clearly visible to the human eye, and yet small enough to be transmitted across the web (MSDNArchive, 2006).
Vectored graphics, on the other hand, are used for clearly defined shapes, or images with little differentiation between one area and another, e.g. logos or text. These types of graphics when rendered via vectoring are usually much smaller and cleaner than when rendered using rasterization. Additionally, these graphics can be resized indefinitely without losing image precision. For instance, zooming in on this text will not make the text blurry or incomprehensible (as long as it is viewed in Word and not via a rasterized PDF).
As hinted at, the primary drawback of raster images is their inability to be zoomed in on indefinitely. After a point, a rasterized image begins to lose coherence to the human eye, and all that can be seen are pixelated shapes which do not make much sense as an image. Conversely, vectoring is no good at rendering highly detailed images, such as a photograph of a tree. The mathematical descriptions for such images are inherently large and thus remove the primary benefit of vectoring: small file size.
From this can be gathered the situations in which each imaging type should be used. Rasterization should be used for highly detailed or complex images, which can be compressed using methods such as JPEG. Vectoring should be used for graphics that use simple geometric shapes and need to remain clear even when resized beyond standard proportions. Each has its use; it is simply a matter of knowing when, or when not, to use them.
HTML Frames and their Popularity
The primary use of frames is to segregate different parts of the rendered browser window by their use and to cut down on the loading time of elements shared across multiple pages (W3C, n.d.). For instance, a site might have a navigation pane in one frame, an information pane in another, and a content pane in yet another. The content can change as required by the navigation and/or information pane, while the navigation and information frames stay exactly the same. In this way, the elements in those two frames need to be rendered only once, upon initial load, rather than every time the site content is updated.
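The three-pane layout above can be sketched as classic frameset markup, emitted here from JavaScript so the example is self-contained. The file names are hypothetical; links in the navigation frame would use target="content" to swap only the content pane.

```javascript
// Sketch of the three-frame layout: navigation and information frames
// load once; only the content frame changes as the user navigates.
function framesetPage() {
  return [
    '<frameset cols="20%,80%">',
    '  <frame src="nav.html" name="nav">',
    '  <frameset rows="30%,70%">',
    '    <frame src="info.html" name="info">',
    '    <frame src="content.html" name="content">',
    '  </frameset>',
    '</frameset>',
  ].join('\n');
}
```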
Where frames go astray, and where they are considered the most dangerous, is when they get turned towards nefarious uses. Enter the clickjacker, or clickjacking (Atwood, 2009). Clickjacking is essentially loading one page, set to invisible, on top of another. The bottom page, which is visible, shows user-authenticated content such as Faceplace or Twixter. Unbeknownst to the user, they click on a menu, link, or picture expecting to reach whatever hyper content it entails, only to have the click hijacked and their browsing experience rerouted to a new site altogether. In this way, frames open up a whole new level of security breaches for which novice users are entirely unprepared.
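The standard server-side defence is simply to refuse to be framed, via the X-Frame-Options response header. The helper below mimics, in simplified form, how a browser interprets that header; it assumes header names are already lowercased (as Node.js delivers them) and is illustrative rather than a full parser.

```javascript
// Decide whether a browser should allow this page to be framed, based
// on the X-Frame-Options response header (anti-clickjacking defence).
function framingAllowed(headers, framingOrigin, ownOrigin) {
  const value = (headers['x-frame-options'] || '').toUpperCase();
  if (value === 'DENY') return false;                          // never framable
  if (value === 'SAMEORIGIN') return framingOrigin === ownOrigin;
  return true; // no header: the page may be framed, and so clickjacked
}
```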
From my personal experience with webpages, I cannot really say that frames are becoming "more popular." That being said, it is hard to deny the usefulness of a good frame. Google uses them, Facebook uses them, and even Twitter uses them. All three are big names in the current generation of the Web; however, three sites do not make an overall web trend. What does make a web trend is the sheer number of pages devoted to talking about frames. A Google search for "html frames" returns 135,000,000 results (Google, n.d.). As such, the popularity of talking about using HTML frames is nothing to balk at.
Cookies only pose a privacy/security risk for unwary users; that is, users who do not understand what a persistent session means. For instance, a site that persistently stores a user's browsing history for use in a form can leave personal data open to the next user of that particular computer. This is especially relevant when a user uses a public terminal at a café or library.
This being said, the advantages of cookies can outweigh their risks: not having to retype the same fields each time a user visits a particular web page, for instance. However, cookies can only store a finite amount of data, which means developers must pick and choose which fields to track automatically, rather than automating any redundant fields they please.
With this in mind, the advantages definitely outweigh any disadvantages cookies may pose, particularly with modern browsers, which alert the user when cookies are being used. As long as users are aware their input can be tracked, the risk of using cookies on trusted sites is minimal. Simply put, users should not accept cookies from untrusted strangers.
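The mechanics of cookies can be sketched as plain string handling. The helpers below are a simplified reading and writing of the Cookie/Set-Cookie formats; real cookies are capped at roughly 4 KB each, which is the finite-storage limit noted above. Field names are hypothetical.

```javascript
// Write one Set-Cookie value: Max-Age makes the cookie persistent
// across sessions, which is both the convenience and the risk.
function serializeCookie(name, value, maxAgeSeconds) {
  return `${encodeURIComponent(name)}=${encodeURIComponent(value)}; Max-Age=${maxAgeSeconds}; Path=/`;
}

// Read a Cookie request header ("a=1; b=2") back into an object.
function parseCookies(header) {
  const jar = {};
  for (const pair of header.split(';')) {
    const eq = pair.indexOf('=');
    if (eq === -1) continue;
    jar[decodeURIComponent(pair.slice(0, eq).trim())] =
      decodeURIComponent(pair.slice(eq + 1).trim());
  }
  return jar;
}
```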
Online Marketing: Plush Packet Incorporated (PPI)
Online marketing is a booming business, with entire organizations basing their business model around the distribution of user time via targeted advertisements (York, 2011). In order for PPI to remain competitive on the modern web, their advertising media must take a form that is both non-intrusive and eye catching. That is, a customer who perceives an ad as "annoying" is less likely to follow the advertised product than one who finds the ad entertaining and/or enjoyable. As such, PPI should consider animated GIFs and user-manipulable Flash advertisements.
First and foremost, Flash advertisements are a worrisome advertising medium, as a large portion of PPI's audience will likely have a Flash blocker installed. That is, the ad will never be seen, because the user's browser blocks the Flash advertisement from ever being shown. For this reason, these advertisements should only be published on sites users trust, such as YouTube or news sites like News.com.au. A further consideration is how these Flash advertisements interact with the user.
A Flash media clip that plays automatically when a user enters a site will be considered annoying and frustrating, turning customers away from PPI's products (Lomas, 2012). Instead, consider using a static Flash introduction screen, whereby the initial purpose of the advertisement is clearly conveyed without animation. If users wish to gain more insight into the product, they can interact with the Flash object, thereby putting "control" in their hands. This will, in turn, lead to greater consumer satisfaction, rather than disgruntled "potential" customers.
For sites where Flash is not a good idea, animated GIF images can be used instead. This type of media plays a series of image frames, without sound, which can convey the required commercial message without being as intrusive as auto-playing Flash media. In essence, it gives PPI an active banner without causing undue annoyance to PPI's customers. The only downside to the animated GIF is that each frame loads as it appears, so a large GIF can load fairly slowly, making the overall animation choppy.
Both of these options assume that the advertising banner PPI uses will be animated. If the image is static, on the other hand, then the choice of image type comes down to how the image will be used. For instance, if the banner contains strictly text and is only being used to relay a message, then a vector-based image would be the best choice due to its relatively small file size. If, however, the image contains complex graphics, then a PNG or JPG file should be considered, as either will be smaller than a vectored version of such an image.
Overall, PPI must never alienate its customers. A happy customer is more likely to purchase PPI's products than one who is consistently and annoyingly bombarded with unwanted advertisements. If PPI needs a good example of this, simply think of all the spam that gets shoved into junk mail folders: those advertisements are never seen. This is exactly what PPI should avoid. With interactive Flash that gives the user a choice, or silent, unobtrusive moving images, web advertising can be made to look good and make customers happy to purchase PPI's products too.
Huntley, R. (2013, July 16). Annoying, but effective: the power of online advertising. Retrieved September 17, 2013, from BRW: http://www.brw.com.au
Lomas, N. (2012, October 24). Online Ad Survey: Most U.S. Consumers "Annoyed" By online Ads. Retrieved September 17, 2013, from Tech Crunch: http://techcrunch.com
Sebesta, R. (2013). Programming the World Wide Web (7th ed.). Boston: Pearson.
York, J. C. (2011, July 13). We are Google's Product, Not its Customers. Retrieved September 17, 2013, from Al Jazeera: http://www.aljazeera.com
Web Accessibility: Tools and Guidelines
One of the very best sources for ensuring a site conforms to accessibility standards for people around the world is the Web Content Accessibility Guidelines (WCAG) 2.0 (W3C, 2008). The WCAG ensures web developers build their sites with impaired as well as unimpaired users in mind. That is, a developer who follows the WCAG will build a site that works in standard browsers, such as IE, as well as accessibility browsers like WebbIE (WebbIE, NA).
A great example of this is adding alternative text to both images and tables (W3C, 2013). In fact, anywhere an accessibility browser may render a page in a confusing manner, alt text should be used to help users navigate more easily. For instance, WebbIE may render a site's navigation in a different location than what is visually seen in a standard browser, because the site is rendered based on mark-up location and then read back to the user. Placing alt text at the top of a page explaining this, along with a link to the correct navigation, will greatly assist accessibility.
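Enforcing alt text can be done mechanically when markup is generated. The helper below refuses to emit an image tag without alternative text; the function name is hypothetical and attribute escaping is simplified.

```javascript
// Emit an <img> tag, requiring the alt text a screen reader will
// speak in place of the image.
function imgTag(src, altText) {
  if (!altText || !altText.trim()) {
    throw new Error(`Image ${src} is missing alt text`);
  }
  const esc = (s) => String(s).replace(/"/g, '&quot;'); // simplified escaping
  return `<img src="${esc(src)}" alt="${esc(altText)}">`;
}
```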
Another point of concern is page navigation for physically disabled users. Every navigation point on a site should be reachable with the Tab key, or, failing that, alternative navigation options should be offered. An example is a navigation drop-down pane that works brilliantly with a mouse but cannot be reached by keyboard tabbing, because it is driven by CSS hover states. A simple solution is to provide a top-level link to a secondary page containing the entire site navigation. Impaired users thus retain access to all content on a site, without drastically altering site themes.
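The "top-level link" workaround can be sketched as a skip link placed first in the tab order. The file name and id below are hypothetical; placing the link first in the document source is what makes it the first Tab stop.

```javascript
// A skip link: the first element keyboard users reach when pressing
// Tab, jumping to a plain page that lists the full site navigation.
function skipLink() {
  return '<a href="sitemap.html" id="skip-nav">Full site navigation</a>';
}

// Placed at the very top of the <body>, before any CSS-driven menus,
// so it is reachable even when the drop-down panes are not.
const header = skipLink() + '\n<nav class="hover-menu">...</nav>';
```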
While the web has been one of the greatest boons for hearing impaired persons, there are still instances where sound is a primary interaction within the media. An example of this is going to YouTube.com and trying to watch any clip on the site without sound. YouTube has overcome this through closed captioning. However, the transcripts for these clips must be uploaded by each media owner. Ensuring these transcripts are not overlooked should be a huge concern for any developer loading media to the web.
Sight, touch, and sound: three important concerns for which easily available solutions exist for any developer to implement. The web is about increasing information dissemination across boundaries, social classes, and impairments. By neglecting the last of these, web developers ignore a critical portion of the web's user base, a portion which could contain someone as important as Stephen Hawking or Christopher Reeve.
W3C. (2008, December 11). Web Content Accessibility Guidelines (WCAG) 2.0. Retrieved October 29, 2012, from W3C: http://www.w3.org
Personally, I prefer the ease of using Dreamweaver and Illustrator, although that is likely because I have been using them for years. These two applications are inherently robust and can take quite a while to learn for novice users. However, they are powerful tools that provide in-depth manipulation of both websites (Dreamweaver) and visual media (Illustrator).
Dreamweaver allows a developer to write HTML code and watch it render live as they type. Additionally, it interfaces with multiple browsers (e.g., Internet Explorer, Firefox, Chrome, or Opera), enabling the developer to test code on the fly for different browsing platforms. This enables the greatest level of site compatibility with the least hassle. Foremost, however, is Dreamweaver's ability to validate and check code against HTML standards before the developer publishes their work.
Graphical manipulation is a key component of any web media these days. Illustrator brings professional-level media creation to the fingertips of developers: raster and vector images can be created that are compatible with all sites. Additionally, Illustrator can vectorise raster images, allowing media manipulation across format boundaries, for instance, taking a logo built in Photoshop and vectorising it for web use.
All Adobe products have student versions available. Moreover, Walden offers a discount for students and faculty on Adobe Creative Cloud (CDW-G, 2013).
Free software that mimics these two applications to an extent is definitely available. OneXtraPixel offers an entire list of free, open source products which are, in places, nearly as good as Dreamweaver (Huang, 2013). However, any developer should treat the terms "free" and "open source" carefully, as they often come with lacking support or compatibility. For instance, the first application on OneXtraPixel's list, Quanta Plus, is for Linux systems only, meaning it will not work for a developer on a Windows-based OS.
Several media development platforms are available which act as alternatives to Illustrator, most notably Serif DrawPlus and Creative Docs.Net (Creative Docs.Net, 2013; Serif, 2013). Both packages offer developers the ability to create and modify vector-based graphic images. However, Creative Docs.Net does not offer raster-to-vector conversion, while DrawPlus does. The primary downside is that both are Windows-based applications, leaving Mac and Linux developers at a loss.
Overall, my personal opinion is that Adobe offers the best creative suite on the market. It is expensive; nevertheless, it is worth it for the professional quality the software provides. Additionally, in a professional environment, Adobe will often be the only supported software available, as its support packages and software updates are constant and up to date. While not everything, this offers reasonable assurance that developed sites and media will stay as current as monetarily possible.
Web 2.0, the Semantic Web, and the Future of Web Development
Web 2.0 and the semantic web are both made up of differing technologies that allow greater interaction between desktop-localized interfaces and pervasive web interfaces: technologies like PHP, MySQL, AJAX, ASP.NET, and Ruby on Rails. Each of these expands on existing paradigms, allowing greater interaction between users and web applications. That is, websites themselves already provide a form of one-way communication; these technologies expand that into two-way, and multi-way, communications.
Web programming platforms such as PHP and ASP.NET give web developers the tools to build sites that interact securely with users without sending large amounts of proprietary code to their browsers. In essence, the programming is kept on the server side; only the rendered HTML required to view a given page is sent when a user interacts with the web application. This is the embodiment of client-server architecture brought into the 21st century.
AJAX and Ruby on Rails work in much the same way; however, their goal is more one of asynchronous updates to web applications, along with interfacing with databases such as MySQL. With these, web developers can create high-speed applications that store and retrieve huge amounts of data without affecting the overall browsing experience. In turn, users do not need to wait for web sites to respond when interacting with them; instead they can multitask within a site, doing several different things at once.
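The asynchronous pattern can be sketched as follows. The transport function is injected so the sketch is self-contained; the endpoint name is hypothetical, and the stand-in transport below simply resolves like a server reply would.

```javascript
// AJAX-style asynchronous update: ask the server for data and carry on;
// only the relevant part of the page is re-rendered when the reply arrives.
async function refreshMessages(transport, render) {
  const data = await transport('/api/messages'); // hypothetical endpoint
  render(data);        // update just this region, not the whole page
  return data.length;
}

// Stand-in transport, playing the role of the server's JSON reply:
const fakeTransport = async () => ['hello', 'world'];
```

Because the function is async, the rest of the page remains responsive while the request is in flight, which is the whole point of the technique.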
Another tool which is advancing the Web is HTML5. This new markup standard allows developers to embed media into their content with much greater ease than ever before. While it is not going to completely replace tools like Adobe Flash, it will cut back on the amount of Flash a developer must know. For instance, embedding a media clip in a web page only takes a slight amount of HTML5 markup, whereas it would require a great deal of knowledge about Flash to do so the old way. However, Flash is still a powerful tool in terms of creating rich media centric web applications focused on user interaction.
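The "slight amount of HTML5 markup" for embedding media can be sketched directly; it is emitted from JavaScript here so the example is self-contained. File names are hypothetical, and the track element attaches the caption transcript discussed under accessibility above.

```javascript
// Emit an HTML5 media embed: no Flash required, and a <track> element
// carries closed captions for hearing-impaired users.
function videoTag(mp4Src, captionsSrc) {
  return [
    '<video controls width="640">',
    `  <source src="${mp4Src}" type="video/mp4">`,
    `  <track src="${captionsSrc}" kind="captions" srclang="en" label="English">`,
    '  Your browser does not support HTML5 video.',
    '</video>',
  ].join('\n');
}

const embed = videoTag('clip.mp4', 'clip-captions.vtt');
```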
These technologies have enabled Web 2.0 to gain traction in connecting vast user bases with each other through collaborative means (e.g. Wikipedia, Facebook, or Twitter). The web of the 90s gave way to the web of the noughts, in which meme sharing, collaboration, and massive information stores became the commonality. Where the 90s paved the way with static single-request, single-return pages, Web 2.0 is all about multiple-request, multiple-return sites. In essence, information overload.
More and more information is now being shared through cloud computing: platforms from which desktop-style applications can be run at any time from any machine via the web. Take Google Docs, for instance (Google, 2013). Formerly, users required an entire suite of document products to be installed on any computer they used. Using Google Docs, however, users can access office applications at any time via the web.
The challenge in this lies in securing confidential or private information. The web is an inherently insecure medium, and having private information stored in a shared database means that anyone with access to that cloud medium is a potential threat to everyone else's stored data. An unfortunate example of this is the recent hacking of Adobe's systems (Krebs, 2013). Supposedly secure cloud-stored data was breached and stolen by likely nefarious individuals. The full ramifications are still unfolding, but most notably "customer names, encrypted payment card numbers, expiration dates and information relating to orders" were stolen, all of which were stored in the cloud (Krebs, 2013).
Further to this is how information will be delivered in the semantic web, or Web 3.0. Organizations are now defining, cataloging, and portioning out information to their users. Rather than being inundated with massive amounts of data, users will begin to see their searches and their web application interaction more finely tuned towards their own personal habits and requirements. In a way, the web is shrinking.
Eli Pariser recently gave a TED talk about why we should all beware online "filter bubbles" (Pariser, 2011). That is, sites like Google and Facebook constantly and consistently filter search results and the content they show, based on each user's browsing habits. This means users are less likely to see mentally engaging content, such as political upheavals in the Middle East, and more likely to see their favorite online jokes instead. Users' worldviews are thus limited to what they themselves care about, rather than what is really going on around them.
Increasingly, web technologies are expanding web interactivity. This in turn is leading the way towards a more pervasive, "always on", in-the-cloud mentality. However, security cannot be forgotten in the headlong rush towards greater user experiences. Moreover, a filtered web has the potential to break the entire purpose of the medium: the free-flowing sharing of diverse information around the world.