Overview: The webinar covers the foundations of data compression in the context of the broader subject of information theory. The learner will gain valuable insight into how lossless compression works and why the techniques are robust and trustworthy. Historically, data compression concepts arrived just in time to support the expanding data storage and wireless communication needs of the modern era. The measure of information is defined probabilistically and is formally called entropy, a term that Claude Shannon borrowed from statistical thermodynamics.
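To make the probabilistic definition concrete: for a source whose symbols occur with probabilities p, the entropy is H = -Σ p·log2(p) bits per symbol, the theoretical lower bound on the average number of bits any lossless code needs per symbol. The following is a minimal illustrative sketch (assuming Python; the function name shannon_entropy is ours, not taken from the webinar materials) that estimates this quantity from a byte stream's empirical symbol frequencies:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Estimate entropy in bits per byte from empirical symbol frequencies."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive stream has low entropy and compresses well;
# a stream using all byte values equally often is near 8 bits/byte.
print(shannon_entropy(b"aaaaaaab"))        # ~0.54 bits/byte, highly compressible
print(shannon_entropy(bytes(range(256))))  # 8.0 bits/byte, essentially incompressible
```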
Why should you Attend: The information age continues to yield more and more data. The Internet, Big Data, Cloud Computing, and data storage requirements are measured in ever-increasing scales of Terabytes, Petabytes, Exabytes, and Zettabytes. Does data compression provide a solution to stem this ever-expanding flood? Can you trust data compression? Doesn't it put your data “at risk”? Are some types of data more compressible than others? How do the methods of lossless data compression (covered here) differ from lossy data compression? Take away from this session a meaningful understanding of lossless compression, its limitations, and the tradeoffs between storage reduction and the extra processing required to compact and re-expand data.
Areas Covered in the Session:
- Entropy as the measure of information
- Shannon's Source Coding Theorem
- Huffman Trees and Huffman Coding (see the sketch after this list)
- Arithmetic coding
- Dictionary methods
- Transform methods
- Data deduplication
- Implementation considerations, Open Source software, Hardware compression chips
- Performance
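To give a flavor of one listed topic, here is a minimal illustrative sketch of Huffman coding (assuming Python; the helper name huffman_code is ours, not taken from the webinar materials). It repeatedly merges the two least-frequent subtrees so that frequent symbols end up with short codewords:

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a prefix-free Huffman code; frequent symbols get shorter codewords."""
    counts = Counter(data)
    # Heap entries: [subtree frequency, tie-breaker, list of [symbol, codeword] pairs]
    heap = [[freq, i, [[sym, ""]]] for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)          # least-frequent subtree
        hi = heapq.heappop(heap)          # next least-frequent subtree
        for pair in lo[2]:
            pair[1] = "0" + pair[1]       # lo branch gets prefix bit 0
        for pair in hi[2]:
            pair[1] = "1" + pair[1]       # hi branch gets prefix bit 1
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], lo[2] + hi[2]])
    return {sym: code for sym, code in heap[0][2]}

# 'a' is the most frequent byte in the sample, so it receives the shortest codeword.
print(huffman_code(b"abracadabra"))
```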
Raymond is a consulting subject matter expert in the field of information theory, knowledgeable about the principles of error correction, cryptography, and data compression. He has worked extensively in the field of software-defined radio. He has led development efforts for embedded software and firmware on microcontrollers, digital signal processors, and field-programmable gate arrays. He has extensive experience in the verification and validation of systems through all development phases, including formal qualification testing. He enjoys profiling software in order to analyze and better optimize code performance. Raymond holds a bachelor's degree in engineering from Caltech, a master's degree in applied mathematics from San Diego State University, and a doctorate in computational science from Claremont Graduate University.
Call our representative on 1800 447 9407 to have your seats confirmed.
Contact Information:
Event Coordinator
Toll free: 1800 447 9407
Fax: 302 288 6884
EITAGlobal
NetZealous LLC,
161 Mission Falls Lane, Suite 216, Fremont, CA 94539