Monday, June 16, 2014

Testing DWH Strategy

A few key steps:

Analyze source documentation

As with many other projects, when testing a data warehouse implementation, there is typically a requirements document of some sort. These documents can be useful for basic test strategy development, but often lack the details to support test development and execution. Many times there are other documents, known as source-to-target mappings, which provide much of the detailed technical specifications. These source-to-target documents specify where the data is coming from, what should be done to the data, and where it should get loaded. If you have it available, additional system-design documentation can also serve to guide the test strategy.

Develop strategy and test plans

As you analyze the various pieces of source documentation, you'll want to start developing your test strategy. I've found that from a lifecycle and quality perspective it's often best to take an incremental testing approach when testing a data warehouse. This essentially means that the development teams deliver small pieces of functionality to the test team earlier in the process. The primary benefit of this approach is that it avoids an overwhelming "big bang" type of delivery and enables early defect detection and simplified debugging. In addition, this approach helps establish the detailed processes involved in the development and testing cycles. Specific to data warehouse testing, this means testing acquisition staging tables, then incremental tables, then base historical tables, then BI views and so forth.

Another key data warehouse test strategy decision is the analysis-based test approach versus the query-based test approach. The pure analysis-based approach puts test analysts in the position of mentally calculating the expected result by analyzing the target data and related specifications. The query-based approach involves the same basic analysis but goes further to codify the expected result in the form of a SQL query. This offers the benefit of setting up a future regression process with minimal effort. If the testing effort is a one-time effort, the analysis-based path may be sufficient since it is typically faster. Conversely, if the organization will have an ongoing need for regression testing, a query-based approach may be more appropriate.
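For example, a query-based test for a simple aggregation rule can compare an expected result computed directly from the source staging table against the actual value in the target table. This is only a sketch: the table and column names (stg_orders, fact_daily_orders, order_amount) are made up for illustration and would come from your own source-to-target mapping.

    -- Expected vs. actual daily totals; an empty result set means they agree.
    -- Table and column names are hypothetical placeholders.
    -- FULL OUTER JOIN is not available on every database; two LEFT JOIN checks work as well.
    SELECT s.order_date, s.expected_total, t.actual_total
    FROM  (SELECT order_date, SUM(order_amount) AS expected_total
           FROM stg_orders
           GROUP BY order_date) s
    FULL OUTER JOIN
          (SELECT order_date, SUM(order_amount) AS actual_total
           FROM fact_daily_orders
           GROUP BY order_date) t
      ON  t.order_date = s.order_date
    WHERE t.actual_total IS NULL
       OR s.expected_total IS NULL
       OR s.expected_total <> t.actual_total;

Once a set of such queries exists, rerunning them after each load gives you the regression capability mentioned above for very little extra effort.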

Test development and execution

Depending on the stability of the upstream requirements and analysis process, it may or may not make sense to develop tests in advance of test execution. If the situation is highly dynamic, any tests developed early may largely become obsolete. In that case, an integrated test development and execution process that happens in real time usually yields better results. In any case, it is helpful to frame the test development and execution process with guiding test categories. For example, a few data warehouse test categories might be (a small SQL sketch of some of these checks follows the list):

  • Record counts (expected vs. actual)
  • Duplicate checks
  • Reference data validity
  • Referential integrity
  • Error and exception logic
  • Incremental and historical load processing
  • Control column values and default values
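To make a few of these categories concrete, here are minimal SQL sketches for record counts, duplicate checks and referential integrity. The tables (stg_customer, dim_customer, rej_customer, dim_country) are hypothetical placeholders, not part of any particular model.

    -- All table names below are hypothetical placeholders.
    -- Record counts: source vs. target vs. rejected.
    SELECT 'source'   AS side, COUNT(*) AS row_count FROM stg_customer
    UNION ALL
    SELECT 'target'   AS side, COUNT(*) AS row_count FROM dim_customer
    UNION ALL
    SELECT 'rejected' AS side, COUNT(*) AS row_count FROM rej_customer;

    -- Duplicate check: the business key should be unique in the target.
    SELECT customer_id, COUNT(*) AS occurrences
    FROM dim_customer
    GROUP BY customer_id
    HAVING COUNT(*) > 1;

    -- Referential integrity: every country code in the target must exist in the reference table.
    SELECT DISTINCT c.country_code
    FROM dim_customer c
    LEFT JOIN dim_country r ON r.country_code = c.country_code
    WHERE r.country_code IS NULL;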


Strategies for Testing Data Warehouse Applications
Businesses are increasingly focusing on the collection and organization of data for strategic decision-making. The ability to review historical trends and monitor near real-time operational data has become a key competitive advantage.
This article provides practical recommendations for testing extract, transform and load (ETL) applications based on years of experience testing data warehouses in the financial services and consumer retailing areas. Every attempt has been made to keep this article tool-agnostic so as to be applicable to any organization attempting to build a data warehouse or improve on an existing one.
There is an exponentially increasing cost associated with finding software defects later in the development lifecycle. In data warehousing, this is compounded because of the additional business costs of using incorrect data to make critical business decisions. Given the importance of early detection of software defects, let's first review some general goals of testing an ETL application:
  • Data completeness: Ensures that all expected data is loaded into target tables.
  • Data transformation: Ensures that all data is transformed correctly according to the business rules, design specifications and the ETL mapping document.
  • Data quality: Ensures that the ETL application correctly rejects, substitutes default values for, corrects or ignores, and reports invalid data. Rejected records should be reported back to the business team.
  • Performance and scalability: Ensures that data loads and queries perform within expected time frames and that the technical architecture is scalable.
  • Integration testing: Ensures that the ETL process functions well with other upstream and downstream processes.
  • User-acceptance testing: Ensures the solution meets users' current expectations and anticipates their future expectations.
  • Regression testing: Ensures existing functionality remains intact each time a new release of code is completed.

Data Completeness
One of the most basic tests of data completeness is to verify that all expected data loads into the data warehouse. This includes validating that all records, all fields and the full contents of each field are loaded. Strategies to consider include (a few of these are sketched in SQL after the list):
  • Comparing record counts between source data, data loaded to the warehouse and rejected records.
  • Comparing unique values of key fields between source data and data loaded to the warehouse. This is a valuable technique that points out a variety of possible data errors without doing a full validation on all fields.
  • Utilizing a data profiling tool that shows the range and value distributions of fields in a data set. This can be used during testing and in production to compare source and target data sets and point out any data anomalies from source systems that may be missed even when the data movement is correct.
  • Populating the full contents of each field to validate that no truncation occurs at any step in the process. For example, if the source data field is a string (30) make sure to test it with 30 characters.
  • Testing the boundaries of each field to find any database limitations. For example, for a decimal (3) field include values of -99 and 999, and for date fields include the entire range of dates expected. Depending on the type of database and how it is indexed, it is possible that the range of values the database accepts is too small.
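As a sketch of the first few strategies, and assuming a hypothetical stg_orders source table and fact_orders target table, completeness checks of this shape can be written:

    -- stg_orders / fact_orders and their columns are hypothetical placeholders.
    -- Key values present in the source but missing from the target (use MINUS instead of EXCEPT on Oracle).
    SELECT order_id FROM stg_orders
    EXCEPT
    SELECT order_id FROM fact_orders;

    -- Truncation check on a string(30) field: compare the maximum populated length at each step.
    -- LENGTH is LEN on SQL Server.
    SELECT MAX(LENGTH(customer_name)) AS max_len_source FROM stg_orders;
    SELECT MAX(LENGTH(customer_name)) AS max_len_target FROM fact_orders;

    -- Boundary check on a decimal(3) field: values outside the documented range.
    SELECT * FROM fact_orders WHERE quantity < -99 OR quantity > 999;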

Data Transformation
Validating that data is transformed correctly based on business rules can be the most complex part of testing an ETL application with significant transformation logic. One typical method is to pick some sample records and "stare and compare" to validate data transformations manually. This can be useful but requires manual testing steps and testers who understand the ETL logic. A combination of automated data profiling and automated data movement validations is a better long-term strategy. Here are some simple automated data movement techniques (a couple of them are sketched in SQL after the list):
  • Create a spreadsheet of scenarios of input data and expected results and validate these with the business customer. This is a good requirements elicitation exercise during design and can also be used during testing.
  • Create test data that includes all scenarios. Elicit the help of an ETL developer to automate the process of populating data sets with the scenario spreadsheet to allow for flexibility because scenarios will change.
  • Utilize data profiling results to compare range and distribution of values in each field between source and target data.
  • Validate correct processing of ETL-generated fields such as surrogate keys.
  • Validate that data types in the warehouse are as specified in the design and/or the data model.
  • Set up data scenarios that test referential integrity between tables. For example, what happens when the data contains foreign key values not in the parent table?
  • Validate parent-to-child relationships in the data. Set up data scenarios that test how orphaned child records are handled.
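For instance, the parent-child and calculation scenarios above can be probed with queries like these; the order and line-item tables are invented for illustration:

    -- Table and column names are hypothetical placeholders.
    -- Orphaned child records: line items whose parent order is missing from the target.
    SELECT li.order_id, li.line_number
    FROM fact_order_line li
    LEFT JOIN fact_order o ON o.order_id = li.order_id
    WHERE o.order_id IS NULL;

    -- Spot-check a transformation rule, e.g. target amount = source quantity * unit price.
    SELECT s.order_id, s.quantity, s.unit_price, f.order_amount
    FROM stg_orders s
    JOIN fact_orders f ON f.order_id = s.order_id
    WHERE f.order_amount <> s.quantity * s.unit_price;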

Data Quality
For the purposes of this discussion, data quality is defined as "how the ETL system handles data rejection, substitution, correction and notification without modifying data." To ensure success in testing data quality, include as many data scenarios as possible. Typically, data quality rules are defined during design, for example:
  • Reject the record if a certain decimal field has nonnumeric data.
  • Substitute null if a certain decimal field has nonnumeric data.
  • Validate and correct the state field if necessary based on the ZIP code.
  • Compare product code to values in a lookup table, and if there is no match load anyway but report to users.
Depending on the data quality rules of the application being tested, scenarios to test might include null key values, duplicate records in source data and invalid data types in fields (e.g., alphabetic characters in a decimal field). Review the detailed test scenarios with business users and technical designers to ensure that all are on the same page. Data quality rules applied to the data will usually be invisible to the users once the application is in production; users will only see what's loaded to the database. For this reason, it is important to ensure that what is done with invalid data is reported to the users. These data quality reports present valuable data that sometimes reveals systematic issues with source data. In some cases, it may be beneficial to populate the "before" data in the database for users to view.
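The first and last rules listed above can be checked with queries of this shape, assuming hypothetical staging tables (stg_payments, stg_sales) that still hold the raw source values; the regular-expression function shown is Oracle/MySQL style and will differ on other databases:

    -- Staging and reference table names are hypothetical placeholders.
    -- Rows whose raw decimal field contains non-numeric data and should have been rejected or defaulted.
    SELECT *
    FROM stg_payments
    WHERE payment_amount IS NOT NULL
      AND NOT REGEXP_LIKE(payment_amount, '^-?[0-9]+(\.[0-9]+)?$');

    -- Product codes with no match in the lookup table (loaded anyway, but should appear on the report to users).
    SELECT DISTINCT s.product_code
    FROM stg_sales s
    LEFT JOIN ref_products p ON p.product_code = s.product_code
    WHERE p.product_code IS NULL;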

Performance and Scalability
As the volume of data in a data warehouse grows, ETL load times can be expected to increase and the performance of queries can be expected to degrade. This can be mitigated by having a solid technical architecture and good ETL design. The aim of performance testing is to point out any potential weaknesses in the ETL design, such as reading a file multiple times or creating unnecessary intermediate files. The following strategies will help discover performance issues (a load-audit query sketch follows the list):
  • Load the database with peak expected production volumes to ensure that this volume of data can be loaded by the ETL process within the agreed-upon window.
  • Compare these ETL loading times to loads performed with a smaller amount of data to anticipate scalability issues. Compare the ETL processing times component by component to point out any areas of weakness.
  • Monitor the timing of the reject process and consider how large volumes of rejected data will be handled.
  • Perform simple and multiple join queries to validate query performance on large database volumes. Work with business users to develop sample queries and acceptable performance criteria for each query.
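If the ETL framework logs per-component timings to an audit table, the component-by-component comparison above can itself be automated. The etl_audit_log table, its columns and the run identifiers here are assumptions about such a framework, not a standard feature:

    -- etl_audit_log and its columns are assumed, not standard; adjust to your own audit schema.
    -- Compare component run times between the full-volume run and a smaller baseline run.
    SELECT cur.component_name,
           cur.run_seconds  AS full_volume_seconds,
           base.run_seconds AS baseline_seconds,
           cur.run_seconds / NULLIF(base.run_seconds, 0) AS slowdown_factor
    FROM etl_audit_log cur
    JOIN etl_audit_log base ON base.component_name = cur.component_name
    WHERE cur.run_id  = 'FULL_VOLUME_RUN'    -- hypothetical run identifiers
      AND base.run_id = 'BASELINE_RUN'
    ORDER BY slowdown_factor DESC;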

Integration Testing
Typically, system testing only includes testing within the ETL application. The endpoints for system testing are the input and output of the ETL code being tested. Integration testing shows how the application fits into the overall flow of all upstream and downstream applications. When creating integration test scenarios, consider how the overall process can break and focus on touch points between applications rather than within one application. Consider how process failures at each step would be handled and how data would be recovered or deleted if necessary.
Most issues found during integration testing are either data related or result from false assumptions about the design of another application. Therefore, it is important to run integration tests with production-like data. Real production data is ideal, but depending on the contents of the data, there could be privacy or security concerns that require certain fields to be randomized before using it in a test environment. As always, don't forget the importance of good communication between the testing and design teams of all systems involved. To help bridge this communication gap, gather team members from all systems together to formulate test scenarios and discuss what could go wrong in production. Run the overall process from end to end in the same order and with the same dependencies as in production. Integration testing should be a combined effort, not the responsibility solely of the team testing the ETL application.

User-Acceptance Testing
The main reason for building a data warehouse application is to make data available to business users. Users know the data best, and their participation in the testing effort is a key component to the success of a data warehouse implementation. User-acceptance testing (UAT) typically focuses on data loaded to the data warehouse and any views that have been created on top of the tables, not the mechanics of how the ETL application works. Consider the following strategies:
  • Use data that is either from production or as near to production data as possible. Users typically find issues once they see the "real" data, sometimes leading to design changes.
  • Test database views by comparing view contents to what is expected (a sketch of such a check follows this list). It is important that users sign off and clearly understand how the views are created.
  • Plan for the system test team to support users during UAT. The users will likely have questions about how the data is populated and need to understand details of how the ETL works.
  • Consider how users need the data loaded during UAT and negotiate how often the data will be refreshed.
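One way for the test team to support the view checks above is to diff a view against the query users expect it to represent. The view and table names are illustrative only, and the comparison should be run in both directions:

    -- vw_customer_sales / fact_sales and their columns are hypothetical placeholders.
    -- Rows returned by the BI view but not by the users' expected definition (swap the two halves for the reverse check).
    SELECT customer_id, region, total_sales FROM vw_customer_sales
    EXCEPT   -- MINUS on Oracle
    SELECT customer_id, region, SUM(sale_amount) AS total_sales
    FROM fact_sales
    GROUP BY customer_id, region;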

Regression Testing
Regression testing is the revalidation of existing functionality with each new release of code. When building test cases, remember that they will likely be executed multiple times as new releases are created due to defect fixes, enhancements or upstream system changes. Building automation during system testing will make the process of regression testing much smoother. Test cases should be prioritized by risk to help determine which need to be rerun for each new release. A simple but effective and efficient strategy to retest basic functionality is to store source data sets and results from successful runs of the code and compare new test results with previous runs. When doing a regression test, it is much quicker to compare results to a previous execution than to do an entire data validation again.
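A minimal SQL sketch of this store-and-compare idea, assuming the test team snapshots the target table from a known-good run into a hypothetical baseline_fact_sales table:

    -- baseline_fact_sales is a hypothetical snapshot table created from an accepted run.
    -- Differences between the current run and the baseline; run both directions and expect zero rows.
    SELECT * FROM fact_sales
    EXCEPT   -- MINUS on Oracle
    SELECT * FROM baseline_fact_sales;

    SELECT * FROM baseline_fact_sales
    EXCEPT
    SELECT * FROM fact_sales;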

Taking these considerations into account during the design and testing portions of building a data warehouse will ensure that a quality product is produced and prevent costly mistakes from being discovered in production.  

Thursday, June 12, 2014

UNIX


UNIX is a CUI (character user interface) operating system. An operating system is an interface between hardware and application software. UNIX serves as the operating system for all types of computers, including single-user personal computers and engineering workstations, multi-user microcomputers, minicomputers and supercomputers, as well as special-purpose devices. The number of computers running a variant of UNIX has grown explosively, with approximately 20 million computers and more than 100 million people using these systems. The success of UNIX is due to many factors, including its portability to a wide range of machines, its adaptability and simplicity, the wide range of tasks it can perform, its multi-user and multitasking nature, and its suitability for networking, which has become increasingly important as the internet has blossomed.


A UNIX Biography
The origin of UNIX can be traced back to 1965, when a joint venture was undertaken by AT&T Bell Laboratories, the General Electric Company and the Massachusetts Institute of Technology, and a software team that included Ken Thompson, Dennis Ritchie, Rudd Canaday and Brian Kernighan worked on the MULTICS project (Multiplexed Information and Computing Service). The aim was to develop an operating system that could serve a large community of users and allow them to share data if need be. MULTICS was developed for only 2 users. Based on that concept, in 1969 the UNICS (Uniplexed Information and Computing System) operating system was developed for 100 users. The first version ran on a now-museum-piece computer called the PDP-7, and later on the PDP-11, and was written in assembly language. Because its assembly code was machine dependent, this version was not portable, a key requirement for a successful OS.
To remedy this, Ken Thompson created a new language, 'B', derived from BCPL (Basic Combined Programming Language), and set about the herculean task of rewriting the whole UNICS code in this high-level language. 'B' lacked several features necessary for real-life programming. Ritchie sifted through the inadequacies of B and modified it into a new language, which he named 'C'. In 1973 the whole UNICS code was rewritten in C, and the system was named UNIX.
Some commercial flavours of UNIX:
HP-UX -> Hewlett-Packard
Sun Solaris -> Sun Microsystems
IBM AIX -> IBM
BSD -> Berkeley Software Distribution
In 1991, Linus Torvalds at the University of Helsinki, Finland, created Linux, which is distributed under the GNU project.
Flavours of the Linux operating system:
1.      Red Hat
2.      SUSE
3.      Fedora
4.      Ubuntu

Hardware Requirements for UNIX:
·         A minimum of 80 MB of hard disk and 4 MB of RAM.
·         An 80286 or later processor.
·         UNIX requires about 1 MB of RAM for each extra terminal connected.

Salient Features of UNIX
Multi user Capability:
A multi-user operating system means that more than one user can share the same system resources (hard disk, memory, printer, application software, etc.) at the same time.
Multi tasking Capability:

Another highlight of UNIX is that it is multitasking, implying that it is capable of carrying out more than one job at the same time. It allows you to type a program in its editor while it simultaneously executes some other command you might have given earlier, say to sort and copy a huge file. The latter job is performed in the background while in the foreground you use the editor, take a directory listing or whatever else. Depending on the priority of the task, the operating system allots small time slots (of the order of milliseconds or microseconds) to each foreground and background task.
Programming facility:
The UNIX o/s provides the shell. The shell works like a programming language: it provides commands and keywords, and by combining the two a user can write efficient programs.
Portability:
One of the main reasons for the universal popularity of UNIX is that it can be ported to almost any computer system, with only a bare minimum of adaptations to suit the given computer architecture. It runs on everything from 80286 processors to supercomputers.
Communication:
UNIX provides electronic mail. The communication may be within the network of a single main computer, or between two or more such computer networks. Users can easily exchange mail, data and programs through such networks. Whether you are two feet away or two thousand miles apart, your mail takes hardly any time to reach its destination.
Security:
UNIX provides three levels of security to protect data. The first is provided by assigning passwords and login names to individual users, ensuring that nobody else can gain access to your work. At the file level, there are read, write and execute permissions for each file, which decide who can access a particular file, who can modify it and who can execute it. Lastly, there is file encryption. This utility encodes your file into an unreadable format so that even if someone succeeds in opening it, your secrets are safe.
Open system:
The source code for the UNIX system, and not just the executable code, has been made available to users and programmers. Because of this, many people have been able to adapt the UNIX system in different ways. This openness has led to the introduction of a wide range of new features and versions customized to meet special needs. It has also been easy for developers to adapt to UNIX, because the code for the UNIX system is straightforward, modular and compact.
System calls:
Programs interact with the kernel through approximately 100 system calls. System calls tell the kernel to carry out various tasks for the program, such as opening a file, writing to a file, obtaining information about a file, executing a program, terminating a process, changing the priority of a process and getting the time of day. Different implementations of the UNIX system have compatible system calls, with each call having the same functionality; however, the internal routines that perform the work of each system call (usually written in C) can differ from one implementation to another.
Help facility:
UNIX provides manual pages for UNIX commands.




Differences between UNIX and Windows

UNIX                                                              | Windows
------------------------------------------------------------------|------------------------------------------------------------------
UNIX is a multi-user o/s.                                         | Windows is a multi-user o/s.
UNIX is a multitasking o/s.                                       | Windows is a multitasking o/s.
To boot the UNIX o/s, 2 MB of RAM is required.                    | To boot the Windows o/s, 12 MB of RAM is required.
UNIX is based on the process concept.                             | Windows is based on the process and thread concept.
A new process is created for every user request.                  | A single process serves a number of user requests.
If a process is killed, it does not affect other users.           | If a process is killed, it affects all users.
Can run more than 100,000 transactions per minute.                | The maximum in Windows is about 80,000 transactions per minute.
There is no limit on the number of users working with the server. | Limited number of users.
UNIX is an open system.                                           | Windows is a closed system.
UNIX is a portable o/s.                                           | No portability.
UNIX provides a programming facility (the shell).                 | No comparable programming facility.
It is CUI.                                                        | Windows is GUI.
UNIX is not user friendly.                                        | Windows is user friendly.



UNIX System Organization:


·         UNIX is organized in three levels (layers).
·         The heart of UNIX is the kernel.
·         The kernel interacts with the hardware.
·         Communication is carried out by the second layer, the SHELL, which is a command line interpreter.
·         The third layer is the user applications.
·         The kernel is generally stored in a file named unix.

SHELL:
The shell reads your commands and interprets them as requests to execute programs. Because the shell plays this role, it is called a command line interpreter. Besides being a command interpreter, the shell is a programming language; as such, it permits you to control how and when commands are carried out. The shell acts as an interface between the user and the kernel.
KERNEL:
The kernel is the part of the operating system that interacts directly with the hardware of a computer, through device drivers that are built into the kernel. It provides a set of services that can be used by programs, insulating these programs from the underlying hardware. The major functions of the kernel are to manage computer memory, to control access to the computer, to maintain the file system, to handle interrupts (signals to terminate execution), to handle errors, to perform input and output services (which allow computers to interact with terminals, storage devices and printers), and to allocate the resources of the computer (such as the CPU or I/O devices) among users. Programs interact with the kernel through approximately 100 system calls. System calls tell the kernel to carry out various tasks for the program, such as opening a file, writing to a file, obtaining information about a file, executing a program, terminating a process, changing the priority of a process, and getting the time of day.
The UNIX File system:
A file is the basic structure used to store information on the UNIX system. Before we learn any more UNIX commands, it is essential to understand the UNIX file system, since UNIX treats everything it knows and understands as a file. All utilities, applications and data in UNIX are stored as files. Even a directory is a file: one that contains several other files. The UNIX file system resembles an upside-down tree, so the file system begins with a directory called root. The root directory is denoted by a slash (/). Branching from the root there are several other directories such as bin, lib, dev, usr, tmp and etc. There are three different types of files:
1. Regular/ordinary files        2. Directory files        3. Special files
1. Regular/ordinary files:
As a user, the information that you work with will be stored as ordinary files. Ordinary files are aggregates of characters that are treated as a unit by the UNIX system. An ordinary file can contain normal ASCII characters such as text for manuscripts or programs, and it can be created, changed or deleted as you wish.
2. Directory files:
Directory is a file that holds other files and contains information about the locations and attributes of these other files. For example, a directory includes a list of all the files and sub directories that it contains, as well as their addresses, characteristics, file types (whether they are ordinary files, symbolic links, directories or special files), and other attributes.
3. Special files:
A special file represents a physical device. It may be a terminal, a communication device, or a storage unit such as a disk drive. Special files are of two types: block special files (e.g., a CD-ROM or floppy disk), which are read and written in blocks rather than as readable text, and character special files (e.g., STDIN, STDOUT and STDERR), which are read and written character by character in a readable format.
UNIX System Organization
/ (root)
          /bin       /lib       /dev       /usr       /tmp       /etc       /var       /home
/ (root):  This is the root directory of the entire file system and the root directory of the super user.   
/bin:  bin stands for binary. This directory contains executable files for most of the UNIX commands. UNIX commands can be either C programs or shell programs. Shell programs are nothing but a collection of several UNIX commands.
/lib: This directory contains all the library functions provided by UNIX for programmers. The programs written under UNIX make use of these library functions in the lib directory.
/dev: This contains the special files that represent devices such as terminals, printers and storage devices. These files contain device numbers that identify the devices to the operating system.
/usr: This contains other user-accessible directories, such as /usr/share/man, which provides the online manual pages.
/tmp: This contains all temporary files used by the UNIX system or user.
/etc: This contains system administration and configuration databases.
For example, users details, group users details etc.
/etc/passwd -> in this file you can find the users details
/etc/group -> in this file you can find the group details
/var: This contains the directories of files that vary from system to system, such as files that log system activity, accounting files and mail files.
/home: This contains the home directories and files of all users. If your logname is user1, your default home directory is
/home/user1.
/root: The home directory of the super user (root).
/sbin: Contains system administration binaries, used mainly by the super user.

Shell
Bourne Shell (sh)
C-Shell (csh)
Korn Shell (ksh)
Bourne Again Shell(BASH)
Extended C-Shell (tcsh)

Bourne Again Shell (bash)
Bash is the shell, or command language interpreter, for the GNU operating system. The name is an acronym for “Bourne-Again Shell”. Bash is largely compatible with sh and incorporates useful features from the Korn shell (ksh) and the C shell (csh). It offers functional improvements over sh for both interactive and programming use.







Wednesday, June 11, 2014

Bug Life Cycle


Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on the software. A bug is a specific concern about the quality of the Application Under Test (AUT).
Bug Life Cycle:
In the software development process, a bug has a life cycle. The bug should go through this life cycle before it is closed. A specific life cycle ensures that the process is standardized. The bug attains different states during this life cycle.

The different states of a bug can be summarized as follows:
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
Description of Various Stages:
1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the tester's lead confirms that the bug is genuine and changes the state to “OPEN”.
3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.
4. Test: Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”, which specifies that the bug has been fixed and released to the testing team.
5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. There can be many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then the status of one of them is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
Guidelines on deciding the Severity of Bug:
Indicate the impact each defect has on testing efforts or users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects.
A sample guideline for assignment of Priority Levels during the product test phase includes:
1.    Critical / Show Stopper — An item that prevents further testing of the product or function under test is classified as a critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.
2.    Major / High — A defect that does not function as expected/designed, or that causes other functionality to fail to meet requirements, is classified as a major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
3.    Average / Medium — Defects that do not conform to standards and conventions are classified as medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links that lead to different end points.
4.    Minor / Low — Cosmetic defects which do not affect the functionality of the system are classified as minor bugs.
Guidelines on writing Bug Description:
A bug can be expressed as “result followed by action”; that is, the unexpected behavior that occurs when a particular action takes place is given as the bug description.
1.    Be specific. State the expected behavior which did not occur (for example, a pop-up that did not appear) and the behavior which occurred instead.
2.    Use present tense.
3.    Don’t use unnecessary words.
4.    Don’t add exclamation points. End sentences with a period.
5.    DON’T USE ALL CAPS. Format words in upper and lower case (mixed case).
6.    Mention steps to reproduce the bug compulsorily.


1)  Severity:
It is the extent to which the defect can affect the software; in other words, it defines the impact that a given defect has on the system. For example, if an application or web page crashes when a remote link is clicked, clicking the remote link is rare for a user, but the impact of the application crashing is severe. So the severity is high but the priority is low.
Severity can be of following types:
§  Critical: The defect results in the termination of the complete system or of one or more components of the system and causes extensive corruption of data. The failed function is unusable and there is no acceptable alternative method to achieve the required results; the severity is stated as critical.
§  Major: The defect results in the termination of the complete system or of one or more components of the system and causes extensive corruption of data. The failed function is unusable, but there exists an acceptable alternative method to achieve the required results; the severity is stated as major.
§  Moderate: The defect does not result in termination, but causes the system to produce incorrect, incomplete or inconsistent results; the severity is stated as moderate.
§  Minor: The defect does not result in termination and does not damage the usability of the system, and the desired results can be easily obtained by working around the defect; the severity is stated as minor.
§  Cosmetic: The defect is related to the enhancement of the system, where the changes concern the look and feel of the application; the severity is stated as cosmetic.
2)  Priority:
Priority defines the order in which we should resolve a defect. Should we fix it now, or can it wait? This priority status is set by the tester for the developer, indicating the time frame in which to fix the defect. If high priority is mentioned, then the developer has to fix it at the earliest. The priority status is set based on the customer requirements. For example, if the company name is misspelled on the home page of the website, then the priority is high and the severity is low.
Priority can be of following types:
§  Low: The defect is an irritant which should be repaired, but the repair can be deferred until after more serious defects have been fixed.
§  Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
§  High: The defect must be resolved as soon as possible because it affects the application or the product severely. The system cannot be used until the repair has been done.



A few very important scenarios related to severity and priority that are often asked about during interviews:
High Priority & High Severity: An error which occurs in the basic functionality of the application and does not allow the user to use the system. (E.g., in a site maintaining student details, if the application does not allow a record to be saved, this is a high priority and high severity bug.)
High Priority & Low Severity: A spelling mistake on the cover page, heading or title of an application.
High Severity & Low Priority: An error in the functionality of the application (for which there is no workaround) that does not allow the user to use the system, but which occurs on a link that is rarely used by the end user.

Low Priority and Low Severity: Any cosmetic or spelling issue within a paragraph or in the body of a report (not on the cover page, heading or title).