Monday, December 10, 2007

Software Methodologies

Software Engineering Methodologies

In software engineering and project management, a methodology is a codified set of practices (sometimes accompanied by training materials, formal educational programs, worksheets, and diagramming tools) that may be repeatably carried out to produce software.

Software engineering methodologies span many disciplines, including project management, analysis, specification, design, coding, testing, and quality assurance. The methodologies guiding this field are collations of these disciplines.

Methodology Versus Method

Disagreement exists regarding the relationship between the terms method and methodology. In common use, methodology is frequently substituted for method; seldom does the opposite occur. Some argue this occurs because methodology sounds more scholarly or important than method. A footnote to the word methodology in the 2006 American Heritage Dictionary notes that "the misuse of methodology obscures an important conceptual distinction between the tools of scientific investigation (properly methods) and the principles that determine how such tools are deployed and interpreted." In academia, the distinction between practice (i.e., method) and the philosophical basis for the practice (i.e., methodology) tends to be more clearly delineated.

In Software Engineering in particular, the discussion continues. One could argue that a software engineering method is a recipe, a series of steps, to build software, while a methodology is a codified set of recommended practices, sometimes accompanied by training materials, formal educational programs, worksheets, and diagramming tools. In this way, a software engineering method could be part of a methodology. Also, some authors believe that a methodology carries an overall philosophical approach to the problem. Using these definitions, Software Engineering is rich in methods but has fewer methodologies. There are two mainstream types of methodologies: Structured Methodology (Information Engineering, SSADM and others), which encompasses many methods and software processes; and Object-Oriented Methodology (OOA/OOD and others).


In general, almost all methodologies can be classified into two types:

  1. Planned development.
  2. Unplanned development.

Planned development methodologies insist that there should be non-code artefacts that describe the system before actual development can start. That is, we need to produce documents for requirements, architecture and design, test plans and test cases before we actually start development. This approach has its advantages and is invaluable in distributed development or development following an onsite-offshore model. However, the drawback of this methodology is the heavy upfront investment of time and material in planning the non-development activities. Another drawback is that once development starts and change requests start getting implemented, most of the analysis- and design-time artefacts soon get out of sync. One may argue that this need not always be the case, and he or she would not be wrong: there is no reason why all artefacts cannot be kept in sync, not just a project plan or a requirements traceability matrix. My own experience over the years has shown that such cases are extremely few and far between, even in process-heavy CMM Level 5 companies! The main problem is not that these processes do not offer value; on the contrary, they do. The problem is one of implementing these processes and ensuring that they are followed. Planned processes are generally so heavyweight that they are often relegated to oblivion, only to be dusted back to life when there is a quality audit.

Unplanned development relies on informal communication between the development team and the requirements team for building an application. The specifications are narrated like a story. The development team starts by building very simple building blocks first, and continues building and refactoring as new requirements come in or existing requirements change. Refactoring is key in this type of development, and implementations are mostly pattern-driven to keep that refactoring efficient. This is the XP, or eXtreme Programming, methodology. XP also strongly advocates Test Driven Development. My own take on this is that while Test Driven Development is the way to go, for teams that are not co-located, XP is far from ideal.
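To make the Test Driven Development idea concrete, here is a minimal test-first sketch in Python. The shopping-cart example and all names in it are hypothetical, invented for illustration; in TDD the failing tests below would be written before the `Cart` class exists, and just enough code is then added to make them pass before refactoring.

```python
import unittest

class Cart:
    """Simplest implementation that satisfies the tests written so far."""
    def __init__(self):
        self._items = []

    def add(self, name, price):
        # Each item is stored as a (name, price) pair.
        self._items.append((name, price))

    def total(self):
        # Sum only the prices; names are ignored for the total.
        return sum(price for _, price in self._items)

class CartTest(unittest.TestCase):
    # In TDD these tests come first and fail until Cart is implemented.
    def test_empty_cart_totals_zero(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("book", 10)
        cart.add("pen", 2)
        self.assertEqual(cart.total(), 12)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

The cycle then repeats: a new requirement arrives as a new failing test, and the implementation is grown and refactored until the whole suite passes again.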

For development that involves an onsite-offshore model, a combination of planned and unplanned methodologies yields the best results. The planned phases of the Rational Unified Process are invaluable for requirements gathering, analysis and design. However, these phases are not as significant for the implementation discipline, where the Test Driven Development advocated by XP makes a compelling case.

The most important takeaway from this discussion is that we need both planned and unplanned processes, and we need a painless way of ensuring that these processes are followed: a process that is as lean on documentation as possible while still allowing the processes to pass quality audits.

Wednesday, November 7, 2007

Information Resource Management

Information Resource Management (IRM)
Information Resource Management is the concept that information is a major corporate resource and must be managed using the same basic principles used to manage other assets. This includes the effective management and control of data/information as a shared resource to improve the availability, accessibility and utilization of data/information within government, a ministry or a program. Data administration and records management are key functions of information resource management.

The underlying philosophy behind Information Resource Management (IRM) is to design, inventory and control all of the resources required to produce information. When standardized and controlled, these resources can be shared and re-used throughout the corporation, not just by a single user or application.

There are three classes of information resources:
BUSINESS RESOURCES - Enterprises, Business Functions, Positions (Jobs), Human/Machine Resources, Skills, Business Objectives, Projects, and Information Requirements.
SYSTEM RESOURCES - Systems, Sub-Systems (business processes), Administrative Procedures (manual procedures and office automation related), Computer Procedures, Programs, Operational Steps, Modules, and Subroutines.
DATA RESOURCES - Data Elements, Storage Records, Files (computer and manual), Views, Objects, Inputs, Outputs, Panels, Maps, Call Parameters, and Data Bases.
These three classes of information resources provide the rationale for the three complementary methodologies within "PRIDE".
ENTERPRISE ENGINEERING METHODOLOGY (EEM) - for defining the mission and goals of the business and the development of an Enterprise Information Strategy synchronized with the business.
INFORMATION SYSTEMS ENGINEERING METHODOLOGY (ISEM) - for designing and building enterprise-wide information systems (business processes crossing organizational boundaries). Software Engineering is considered a subset of ISEM.
DATA BASE ENGINEERING METHODOLOGY (DBEM) - to design and develop the corporate data base, both logically and physically.
Each methodology consists of a series of defined phases, activities and operations. Laced throughout the methodologies are defined deliverables and review points to substantiate completeness and to provide an effective dialog between management and developers. The methodologies promote design correctness and the production of a quality product.

Software Engineering

Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.

Typical formal definitions of software engineering are:
"the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software";
"an engineering discipline that is concerned with all aspects of software production";
"the establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines".

Software is often found in products and situations where very high reliability is expected, even under demanding conditions, such as monitoring and controlling nuclear power plants, or keeping a modern airliner aloft. Such applications contain millions of lines of code, making them comparable in complexity to the most complex modern machines. For example, a modern airliner has several million physical parts (and the space shuttle about ten million parts), while the software for such an airliner can run to 4 million lines of code.

Current trends in software engineering
Software engineering is a young discipline, and is still developing. The directions in which software engineering is developing include:
Aspects help software engineers deal with quality attributes (the "-ilities") by providing tools to add or remove boilerplate code from many areas in the source code. Aspects describe how all objects or functions should behave in particular circumstances. For example, aspects can add debugging, logging, or locking control to all objects of particular types. Researchers are currently working to understand how to use aspects to design general-purpose code. Related concepts include generative programming and templates.
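A rough flavour of this cross-cutting idea can be sketched in plain Python with a decorator; this is not a full aspect weaver, just a hypothetical illustration in which a single "logging aspect" is attached to functions without touching their bodies.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """A cross-cutting 'logging aspect': wraps any function with
    entry/exit logging without modifying the function itself."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s with args=%r kwargs=%r",
                     func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@logged
def transfer(amount):
    # Hypothetical business logic; the logging around it comes
    # entirely from the decorator, not from this body.
    return amount * 2

transfer(21)
```

In a real aspect-oriented framework the weaving is done declaratively over many join points at once, rather than by decorating each function by hand.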
Agile software development guides software development projects that evolve rapidly with changing expectations and competitive markets. Proponents of this method believe that heavy, document-driven processes (like TickIT, CMM and ISO 9000) are fading in importance. Some people believe that companies and agencies export many of the jobs that can be guided by heavyweight processes. Related concepts include Extreme Programming and Lean software development.
Experimental software engineering is a branch of software engineering interested in devising experiments on software, collecting data from those experiments, and devising laws and theories from that data. Proponents of this method advocate that the nature of software is such that we can advance our knowledge of software only through experiments.
Model Driven Software Development uses models (both textual and graphical) as primary development artifacts. By means of model transformation and code generation, partial or complete applications are generated.
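A toy version of this model-to-code transformation can be sketched in Python; the model format and names below are hypothetical inventions for illustration, not any real MDD tool's notation. A declarative "model" of an entity is transformed into Python source, which is then executed.

```python
# Hypothetical declarative model of an entity.
model = {"name": "Customer", "fields": ["id", "email"]}

def generate_class(model):
    """Transform the model into Python source for a class whose
    constructor takes one argument per modelled field."""
    params = ", ".join(model["fields"])
    body = "\n".join(f"        self.{f} = {f}" for f in model["fields"])
    return (
        f"class {model['name']}:\n"
        f"    def __init__(self, {params}):\n"
        f"{body}\n"
    )

source = generate_class(model)
namespace = {}
exec(source, namespace)  # "deploy" the generated code into a namespace
customer = namespace["Customer"](1, "a@example.com")
print(customer.email)
```

Real MDD toolchains work the same way in spirit, but transform far richer models (UML, DSLs) through several intermediate representations before emitting code.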
Software Product Lines
A software product line is a systematic way to produce families of software systems, instead of creating a succession of completely individual products. This method emphasizes extensive, systematic, formal code reuse in an attempt to industrialize the software development process.

Software Engineering Today

In 2006, Money Magazine and Salary.com rated software engineering as the best job in America in terms of growth, pay, stress levels, flexibility in hours and working environment, creativity, and how easy it is to enter and advance in the field.

Software economics
As its name implies, software economics is the economics of the software industry. It includes the production, marketing, sales, and support of products that are primarily software based.

Macroeconomics
The field of software is estimated to support a commercial software sector that earns $200 billion to $240 billion in the United States every year. Software engineering drove $1 trillion of economic growth in the U.S. over the last decade.
Microeconomics
About 1/2 of all software projects are cancelled by users who change their minds, whether or not the software engineers would have succeeded.
About 1/4 of all software projects are unable to be delivered, due to changes in requirements, lack of time or resources, or other reasons.
About 1/4 of all software projects are delivered successfully.
Maintenance: Most (70% or more) software engineering effort over the total lifetime of a system goes into maintenance and upgrades.
Delivery: In the course of taking a large software project from conception to end-user acceptance (and actual use), the cost of developing the software will typically range from 20-30% of the total. Other activities (documentation, training infrastructure, support infrastructure, deployment and network design, etc.) account for the other 70-80%.
This explains why free software is not a major economic threat to commercial software. The purchase cost of commercial software is only 20-30% of its total cost to the company, and if the commercial software comes with any guarantees about support or maintenance, that easily covers the purchase cost. Most of the cost of software for a company or organization is in training, deployment, and support.

Wednesday, October 3, 2007

Sears, Roebuck and Company: Moving to Network Computers

After a successful test run, retail giant Sears, Roebuck and Co. is installing between 700 and 1,000 of IBM's Network Stations to run its delivery and customer-service applications. Steve Rutkowski, director of direct delivery systems at Sears, said the company turned to the Network Station Series 1000, IBM's version of Sun's JavaStation, because it packs high-powered PC processing punch but is highly manageable and simple. "We think we see some benefits with this because we get all the capabilities of a PC without the full-blown expenses," Rutkowski said.

Sears wants to develop an internet-based application using Java that will let customers track their deliveries and service. That could eliminate about 40 percent of the calls to service representatives, according to Rutkowski, who noted that the company makes 4 million deliveries per year from more than 100 service centers across the country.

Observers said that ready-to-go Java-enabled productivity applications, like Lotus Development's Java-based eSuite productivity software suite, which comes free with IBM's Network Stations, and a strong showing by pilot test users of network computers are assuring users that lower prices and support costs don't have to add up to limited functionality and slower performance. To date, those issues have been the biggest stumbling blocks to network computer adoption.

a. Why is Sears moving to network computers?
b. What are the business benefits and limitations of using network computers?

Wednesday, September 5, 2007

Most notorious viruses in PC history

The computer virus has completed 25 years. The sinister computer programme that still gives computer users the jitters has come a long way since the days of 'Elk Cloner', the first computer virus, which started circulating in 1982. While some of the early viruses clogged networks, later ones corrupted or wiped documents or had other destructive properties. More recently, viruses have been created to steal personal data such as passwords or to create relay stations for making junk e-mail more difficult to trace. While the earliest viruses spread through floppy disks, the growth of the Internet gave viruses a new way to spread: e-mail. Today, viruses have found several platforms: instant messaging, file-sharing software, rogue web sites, images, etc. As these malicious programmes grow more sophisticated and their numbers increase daily, here's a look at some of the most notorious virus attacks of the last twenty-five years.

Elk Cloner (1982)
Regarded as the first virus to hit personal computers worldwide, "Elk Cloner" spread through Apple II floppy disks. The programme was authored by Rich Skrenta, then a ninth-grade student, who wanted to play a joke on his schoolmates. The virus was put on a gaming disk, which could be used 49 times. On the 50th use, instead of starting the game, it displayed a poem on a blank screen: "It will get on all your disks. It will infiltrate your chips. Yes it's Cloner! It will stick to you like glue. It will modify RAM too. Send in the Cloner!" The computer would then be infected. Though Elk Cloner was a self-replicating virus like most others, it bears little resemblance to the malicious programmes of today. However, it was surely a harbinger of all the security headaches that would only grow as more people got computers -- and connected them with one another over the Internet.

Brain (1986)
"Brain" was the first virus to hit computers running Microsoft's popular operating system DOS. Written by two Pakistani brothers, Basit Farooq Alvi and Amjad Farooq Alvi, the virus left the phone number of their computer repair shop. Brain was a boot-sector virus: it infected the boot records of 360KB floppy disks and would fill unused space on the floppy disk so that it could not be used. The first "stealth" virus, it hid itself from detection by disguising the infected space on the disk. The virus is also known as Lahore, Pakistani and Pakistani Brain; BusinessWeek magazine called it the Pakistani flu. The brothers told TIME magazine they had written it to protect their medical software from piracy, and that it was supposed to target copyright infringers only.

Morris (1988)
Written by a Cornell University graduate student, Robert Tappan Morris, the worm infected an estimated 6,000 university and military computers connected over the Internet. Incidentally, Morris's father was a top government computer-security expert. The computers Morris invaded were part of the Arpanet, an international grid of telephone lines, buried cables, and satellite hookups established by the Department of Defense in 1969. Interestingly, Morris later claimed that the worm was not written to cause damage but to gauge the size of the Internet; an unintended consequence of the code, however, led to the damage it caused.

Melissa (1999)
'Melissa' was one of the first viruses to spread over e-mail. When users opened an attachment, the virus sent copies of itself to the first 50 people in the user's address book, covering the globe within hours. The virus, believed to have been named after a Florida stripper its creator knew, caused more than $80m in damage after it was launched in March 1999. Computers became infected when users received a particular e-mail and opened the Word document attached to it. First found on March 26, 1999, Melissa shut down Internet mail systems at several enterprises across the world after they got clogged with infected e-mails carrying the worm.

The worm was first distributed via a Usenet discussion group. The creator of the virus, David Smith, was sentenced to 20 months' imprisonment by a United States court.

Love bug (2000)
Travelling via e-mail attachments, "Love Bug" exploited human nature and tricked recipients into opening it by disguising itself as a love letter. The virus stunned security experts with its speed and wide reach. Within hours, the pervasive little computer programme tied up systems around the world. The virus, which was similar to the earlier Melissa worm, spread via an e-mail with the tantalising subject line "I Love You." When a recipient opened the attachment, the virus sent copies of itself to his entire address book. It then looked for files with .jpeg, .mp3, .mp2, .css and .hta extensions and overwrote them with itself, changing the extensions to .vbs or .vbe; these files then could not be retrieved in searches. The bug affected companies in Taiwan and Hong Kong -- including Dow Jones Newswires and the Asian Wall Street Journal. Companies in Australia had to shut down their e-mail systems to keep the virus from spreading (80 per cent of the companies in Australia reportedly got hit). The victims also included the Parliaments of Britain and Denmark. In Italy, the outbreak hit almost the entire country. In the United States too, e-mail systems were shut down at several companies.

Code Red (2001)

Said to be one of the most expensive viruses in history, the self-replicating malicious code 'Code Red' exploited a vulnerability in Microsoft IIS servers. Exploiting this flaw in the software, the worm was among the first "network worms" to spread rapidly, as they required only a network connection, not a human opening an attachment. The worm had a more malicious variant known as Code Red II. Both worms exploited a bug in an indexing service shipped with Microsoft's Windows NT 4.0 and Windows 2000 operating systems. In addition to possible website defacement, infected systems experienced severe performance degradation, and the virus could strike multiple times on the same machine. Code Red II affected organizations ranging from Microsoft to the telecom company Qwest to the media giant Associated Press. According to the research firm Computer Economics, the virus caused over $2 billion in damage. Incidentally, Microsoft had issued a patch to fix the vulnerability almost a month earlier; however, most system operators failed to install it.

Blaster (2003)
'Blaster' (also known as Lovsan or Lovesan) took advantage of a flaw in Microsoft software. The worm, along with the 'SoBig' worm which spread at the same time, prompted Microsoft to offer cash rewards to people who helped authorities capture and prosecute the virus writers. The worm started circulating in August 2003. Filtering by ISPs and widespread publicity about the worm curbed its spread. On August 29, 2003, Jeffrey Lee Parson, an 18-year-old from Hopkins, Minnesota, was arrested for creating the B variant of the Blaster worm; he admitted responsibility and was sentenced to an 18-month prison term in January 2005.

Sasser (2004)
Another worm to exploit a Windows flaw, 'Sasser' led to several computers crashing and rebooting themselves. Sasser spread by exploiting the system through a vulnerable network port. The virus, which infected several million computers around the world, caused infected machines to restart continuously every time a user attempted to connect to the Internet, and also severely impaired the infected computer's performance. The first version of the worm struck on April 30, 2004; three modified versions have followed since, known as Sasser.B, Sasser.C and Sasser.D. The companies affected by the worm included Agence France-Presse (AFP), Delta Air Lines, the Nordic insurance company If and its Finnish owner, Sampo Bank.

Thursday, July 26, 2007

the last saga of my favorite number

July is my most awaited month of the year... I expect something good, most especially on the days whose date has a 7... the first two were really great... I really had a great time with the moments that happened during those times... But the latest was not that good... haay... how will I be able to survive all these things... I have actually been suffering from this for about 4 to 5 years... I can't help looking back on the days when I found what I thought was happiness... but as days go by, I realized that it cannot be found in just a single click... effort must be exerted... two days ago, I had a realization about what someone told me... and I am really considering it right now... I just pray that the Lord Almighty will guide me in whatever journey I take... that I will be happy... for anyone who is able to read this, thanks for your time... God bless you... and take care always...