The power of AI is incredible, and learning to make good use of it is in the best interest of both developers and organizations.
In this workshop, we will start with a very short discussion of ethics, legality, and the risks of using AI to assist with software development. Then we will spend most of our time on hands-on exercises. We will take a problem and spend a few minutes collaborating with other humans to solve it. Then we will use AI tools to solve it. We will compare notes and learn the effectiveness of both types of collaboration—with humans and with AI.
git client
Your favorite Java IDE—IntelliJ IDEA recommended
A recent version of Java installed locally on your system
One cannot be all things to all people, and the same is true of software architecture. There are inherent trade-offs that must be made in any architecture. Some architectural trade-offs are obvious, such as performance versus security or availability versus consistency, while others are quite subtle, such as resiliency versus affordability. This presentation will cover a number of architectural trade-offs and strategies for dealing with them.
The role of a technical lead or software architect is to design software that realizes the vision of the stakeholders. However, as the design evolves, conflicting requirements emerge that impact the candidate architecture. Resolving these conflicts often involves architectural trade-offs such as granularity versus maintainability. In addition, with time-to-market pressures and having to do more with less, adopting an architectural framework like TOGAF or using a time-consuming process like ATAM is not an option. Thus, it is essential to have a deep understanding of these architectural trade-offs and to use lightweight resolution techniques.
Regardless of the techniques used to make enterprise solutions Highly Available (HA), failure at some point is inevitable. Resiliency is how quickly a system reacts to, and then recovers from, such failures. This presentation covers a number of architectural resiliency techniques and patterns that help increase Mean Time To Failure (MTTF), a.k.a. fault tolerance, and decrease Mean Time To Recovery (MTTR).
Failure of Highly Available (HA) enterprise solutions is inevitable. But in today's highly interconnected global economy, uptime is critical. The impact of downtime is magnified when factoring in Service Level Agreement (SLA) penalties and lost revenue. Even more harmful is the damage to an organization's reputation as angry customers vent their frustrations on social media.
Resiliency, the neglected stepchild of Availability, is often an afterthought. Yet an enterprise solution without resiliency is not truly HA. This session will cover a number of architectural resiliency techniques and patterns such as Intelligent Agents, Tolerant Reader, and Circuit Breaker. Applying these techniques to an enterprise solution will increase Mean Time To Failure (MTTF), a.k.a. fault tolerance, and decrease Mean Time To Recovery (MTTR).
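To make one of these patterns concrete, here is a deliberately minimal, hand-rolled sketch of a Circuit Breaker in Java. This is not the session's code: a production system would use a library such as Resilience4j, and a full breaker also has a half-open state with a retry timeout, omitted here for brevity.

```java
import java.util.function.Supplier;

// Minimal circuit breaker: after maxFailures consecutive failures the
// circuit "opens" and calls fail fast (returning the fallback) until reset().
public class CircuitBreaker {
    private final int maxFailures;
    private int consecutiveFailures;

    public CircuitBreaker(int maxFailures) {
        this.maxFailures = maxFailures;
    }

    public synchronized <T> T call(Supplier<T> operation, T fallback) {
        if (consecutiveFailures >= maxFailures) {
            return fallback; // open: fail fast, give the struggling service room to recover
        }
        try {
            T result = operation.get();
            consecutiveFailures = 0; // closed: a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;    // count the failure; may trip the breaker
            return fallback;
        }
    }

    public synchronized void reset() {
        consecutiveFailures = 0;
    }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(3);
        for (int i = 0; i < 5; i++) {
            String r = breaker.call(() -> { throw new RuntimeException("boom"); }, "fallback");
            System.out.println(r);
        }
    }
}
```

The key property, and the reason the pattern improves MTTR, is that once the breaker is open the failing dependency is no longer hammered with requests while it tries to recover.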
Architecture documentation is essential for understanding requirements, guiding design, validating implementation, and sustaining solutions. However, architecture documentation is too often a neglected activity, either addressed only at inception or relegated to the end of a cycle. Shortly thereafter, the documentation becomes stale, rendering it useless when a solution must be extended or a production incident requires resolution. This session provides guidance on creating and maintaining architecture documentation that stays current and relevant.
This session covers a number of practices and activities for generating and maintaining architecture documentation throughout the Software Development Life Cycle (SDLC) and beyond. Topics include using views and viewpoints, creating conceptual or candidate architectures, establishing an architecture review process, and recording architecture decisions in a log.
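For the decision log in particular, a lightweight per-decision text file is a common approach. The following is a sketch of one widely used format (the decision shown is invented purely for illustration):

```markdown
# ADR-007: Use PostgreSQL for the order service

Status: Accepted
Date: 2024-03-12

## Context
The order service needs transactional guarantees across order and
payment records, and the team already operates PostgreSQL.

## Decision
Use PostgreSQL as the system of record for the order service.

## Consequences
- Strong consistency for order/payment writes.
- A relational schema must now be migrated alongside code changes.
```

Because each entry is small and lives in version control next to the code, the log tends to stay current in a way that monolithic architecture documents rarely do.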
It's not just architecture—it's evolutionary architecture. But to evolve your architecture, you need to measure it. And how does that work exactly? How does one measure something as abstract as architecture?
In this session we'll discuss various strategies for measuring your architecture. We'll see how you know if your software architecture is working for you, and how to know which metrics to keep an eye on. We'll also see the benefits of measuring your architecture.
We'll cover a range of topics in this session, including:
In this session we will discuss what modular monoliths are, what they bring to the table, and how they offer a great middle ground between monoliths and distributed architectures like microservices.
Monoliths get a bad rep. Experienced software developers have seen one too many monoliths devolve into a big ball of mud, leaving everyone frustrated, with an itch to do a “rewrite”. But monoliths have their pros! They are usually simpler, easier to understand, and faster to build and debug.
On the other side of the spectrum you have microservices—they offer scale, both technically and organizationally, and carry the badge of honor of being “the new cool kid on the block”. But productionizing microservices is HARD.
Why can't we have our cake and eat it too? Turns out, we can. In this session we will explore the modular monolith—all the upsides of a monolith with none of the downsides of distributed architectures. We'll see what it means to build a modular monolith, and how that differs from a traditional layered architecture. We will discuss how we can build architectural governance to ensure our modules remain decoupled. Finally we'll see how our modules can communicate with one another without violating modularity.
By the end of this session you'll walk away with a greater appreciation for the monolith, and see how you can leverage this within your system architecture.
In today’s volatile technology and business climate, big architecture up front is not sustainable. Attempts to define the architectural vision for a system early in the development lifecycle do not work. To accept change, teams are moving to agile methods, but agile methods provide little architectural guidance. In this session, we provide practical guidance for software architecture on agile projects.
We will explore several principles that help us create more flexible and adaptable software systems. We’ll expose the true essence of what’s meant when we say “architectural agility.” And we’ll explore the real goal of software architecture and how we can accommodate architectural change to help increase architectural agility.
Monoliths are out and microservices are in. Not so fast. Many of the benefits attributed uniquely to microservices are actually a byproduct of other architectural paradigms with modularity at their core.
In this session, we’ll look at several of the benefits we expect from today’s architectures and explore these benefits in the context of various modern architectural paradigms. We’ll also examine different technologies that are applying these principles to build the platforms and frameworks we will use going forward.
Along the way, we’ll explore how to refactor a monolithic application using specific modularity patterns and illustrate how an underlying set of principles span several architectural paradigms. The result is an unparalleled degree of architectural agility to move between different architectural paradigms.
As of Java 9, modularity is built into the Java platform…finally! Yet few teams are using it. And in reality, you may never use it…at least not for a while. However, by understanding the module system, you're guaranteed to see the Java platform in a completely different light. In this session, we explore the default module system, how it works on the Java platform, and what’s in the future for the Java Platform Module System.
We will demonstrate the impact that JPMS will have on our existing applications and identify what we must do to get ready for JPMS. You will also see firsthand how to use the JPMS and the benefits that support for modularity on the Java platform will have on your applications.
Confused about Kubernetes? Don't know what it does? This is the session that will bring clarity for all things Kubernetes.
The orchestration wars are over, and we have a winner, namely Kubernetes. However, it's not easy to wrap your head around all the abstractions that Kubernetes uses.
In this two-part session we'll wade into the water, but quickly navigate to the deep end of Kubernetes. We'll use a demo-driven approach along with analogies and diagrams to bring clarity to understanding this massively complex, yet powerful, ubiquitous tool.
We'll cover the following:
We'll also look into the tradeoffs of using tools like Kubernetes, and when it's appropriate, and if there are any alternatives.
Confused about Kubernetes? Don't know what it does? This is the session that will bring clarity for all things Kubernetes.
The orchestration wars are over, and we have a winner, namely Kubernetes. However, it's not easy to wrap your head around all the abstractions that Kubernetes uses.
In this two-part session we'll wade into the water, but quickly navigate to the deep end of Kubernetes. We'll use a demo-driven approach along with analogies and diagrams to bring clarity to understanding this massively complex, yet powerful, ubiquitous tool.
We'll cover the following:
We'll also look into the tradeoffs of using tools like Kubernetes, and when it's appropriate, and if there are any alternatives.
Containers are everywhere. Of course, a large part of the appeal of containers is the ease with which you can get started. However, productionizing containers is a wholly different beast. From orchestration to scheduling, containers offer significantly different challenges than VMs.
This is particularly true of security: securing and hardening containers is very different from securing and hardening VMs.
In this two-part session, we will see what securing containers involves.
We'll be covering a wide range of topics, including:
Containers are everywhere. Of course, a large part of the appeal of containers is the ease with which you can get started. However, productionizing containers is a wholly different beast. From orchestration to scheduling, containers offer significantly different challenges than VMs.
This is particularly true of security: securing and hardening containers is very different from securing and hardening VMs.
In this two-part session, we will see what securing containers involves.
We'll be covering a wide range of topics, including:
In this session we'll take a tour of some features that you might or might not have heard of, but can significantly improve your workflow and day-to-day interaction with Git.
Git continues to see improvements daily. However, work (and life) can take over, and we often miss the changelog. This means we don't know what changed, and consequently fail to see how we can incorporate those changes into our usage of Git.
In this session we will look at some features you are probably aware of, but haven't used, alongside new features that Git has brought to the table. Examples include:
By the end of this session, you will walk away with a slew of new tools in your arsenal, and a new perspective on how this can help you and your colleagues get the most out of Git.
The ability to use, troubleshoot, and monitor Kubernetes as an application developer is in high demand. In response, the Cloud Native Computing Foundation (CNCF) developed the Certified Kubernetes Application Developer (CKAD) program to establish credibility and value in the job market. The exam is different from the typical multiple-choice format of other certifications. It’s completely performance-based, taken under immense time pressure, and requires deep knowledge of the tasks at hand.
This full-day, hands-on workshop walks you through all the topics covered in the exam to fully prepare you to pass with flying colors. The course is suitable for beginners to Kubernetes. You'll learn when and how to apply Kubernetes concepts to manage an application.
I'll hand out print copies of the book “Certified Kubernetes Application Developer (CKAD) Study Guide: In-Depth Guidance and Practice, 1st Edition” for questions from the audience. I'll also provide a free 30-day trial access to the O'Reilly learning platform to read the 2nd edition of the book.
We'll cover the following topics and more:
Run the minikube start command before you arrive at the workshop to ensure that dependencies have been downloaded.

Really? You may wonder. This is 2024; do we all not know Java really well by now? Most of us do, and yet there are so many things we tend not to realize until they bite us in the back.

This is not an introductory session on Java, but one in which we will take existing code, discuss its behavior, and look into the surprises it holds. These are scenarios that the speaker has stubbed his toes on, or has seen other developers stub theirs, in practical, everyday, realistic code. We will take a number of working examples, ask you to identify the behavior, then run the code to see what it really does, and discuss the underlying concepts and the lessons we need to carry forward to avoid falling into traps or developing code that may cause issues in the future.
Really? You may wonder. This is 2024; do we all not know Java really well by now? Most of us do, and yet there are so many things we tend not to realize until they bite us in the back.
This is not an introductory session on Java, but one in which we will take existing code, discuss its behavior, and look into the surprises it holds. These are scenarios that the speaker has stubbed his toes on, or has seen other developers stub theirs, in practical, everyday, realistic code. We will take a number of working examples, ask you to identify the behavior, then run the code to see what it really does, and discuss the underlying concepts and the lessons we need to carry forward to avoid falling into traps or developing code that may cause issues in the future.
It almost feels like we keep hearing this chant: “Monoliths are bad, microservices are awesome.” When architects, technical leads, and developers reject an architecture out of bias, or favor one out of infatuation, organizations lose. The most important questions are: what are the business needs, and which architecture is most suitable for them?
In this presentation we will compare and contrast monoliths and microservices and the consequences of choosing one over the other. The goal is for us to develop objective knowledge so we can choose wisely in order to serve our businesses.
Creating microservices is hard, and it takes significant effort.
In this presentation, we will distill some design and architectural patterns that can help us with that journey. We can learn from some proven solutions that can help us to avoid common mistakes and also help us gravitate toward focusing on the essential problems that are inherent to the Microservices architecture.
Threads are considered lightweight, but they do not scale well. That's one of the reasons we have been so focused on the elastic capabilities of the cloud. Unfortunately, that has an impact both on the environment and on our companies' wallets.
In this presentation we will learn how virtual threads reduce those impacts and help us create scalable applications with minimal changes to code.
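As a taste of the idea, here is a minimal sketch (assuming Java 21). Ten thousand blocking tasks each run on their own virtual thread; with platform threads, that many concurrent blocked threads would be far more expensive in memory and scheduling.

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Runs n blocking tasks, one virtual thread each, and reports completions.
    static int runTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        // try-with-resources: close() waits for all submitted tasks to finish.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        // Blocking parks the virtual thread; its carrier OS
                        // thread is freed to run other virtual threads.
                        Thread.sleep(Duration.ofMillis(10));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("completed = " + runTasks(10_000));
    }
}
```

Note that the code is ordinary blocking code; the "minimum change" the abstract promises is often just swapping in a virtual-thread-per-task executor.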
Java has come a long way in recent years.
In this two-part presentation, we will learn about exciting features including modularization, text blocks, records, sealed classes, and pattern matching, and also how these features interplay with each other to provide the most flexibility and power for you to create fluent code.
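As a small illustration of that interplay (a sketch, not material from the presentation): a sealed interface whose only implementations are records, consumed by an exhaustive switch with record patterns (Java 21).

```java
// Sealed hierarchy: the compiler knows all subtypes, so a switch over
// Shape can be proven exhaustive—no default branch needed.
sealed interface Shape permits Circle, Square, Rectangle {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

public class ShapeDemo {
    // Record patterns deconstruct each shape directly in the case label.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle(double r)              -> Math.PI * r * r;
            case Square(double s)              -> s * s;
            case Rectangle(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(4)));
    }
}
```

Adding a fourth shape to the sealed interface turns every non-exhaustive switch into a compile error, which is exactly the kind of feature synergy the session explores.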
Java has come a long way in recent years.
In this two-part presentation, we will learn about exciting features including modularization, text blocks, records, sealed classes, and pattern matching, and also how these features interplay with each other to provide the most flexibility and power for you to create fluent code.
Dividing a large problem into subproblems that are scheduled to run on different threads is an often-used solution. We've used executors and the fork/join pool for such problems in the past. These solutions, in spite of being very powerful, have significant limitations.
In this presentation we will start with those solutions, discuss the issues, and learn how structured concurrency, introduced in Java 21, can help solve such problems more effectively and elegantly.
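For context, here is the executor-and-futures starting point in miniature (a sketch; the task names are illustrative). The comment marks exactly the manual bookkeeping that structured concurrency's StructuredTaskScope (a preview API in Java 21) ties to a lexical scope instead.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubtaskDemo {
    record User(String name) {}
    record Order(int id) {}

    static String loadProfile() throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<User> user  = executor.submit(() -> new User("alice"));
            Future<Order> order = executor.submit(() -> new Order(42));
            // With plain futures, WE are responsible for joining both results,
            // cancelling the sibling when one fails, and not leaking tasks past
            // this method—the bookkeeping StructuredTaskScope handles by tying
            // subtask lifetimes to the enclosing block.
            return user.get().name() + ":" + order.get().id();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadProfile());
    }
}
```

When either subtask fails here, the other keeps running until the executor closes; structured concurrency makes that cancellation automatic and the parent/child relationship visible in the code's shape.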
Integration, once a luxury, is now a necessity. Doing it well, however, continues to be elusive. Early attempts to build better distributed systems, such as DCOM, CORBA, and SOAP, were widely regarded as failures. Today the focus is on REST, RPC, and GraphQL style APIs.
Which is best? The go-to answer for architects is, of course, “it depends.”
In this session, we look at the various API approaches and how they attempt to deal with the challenges of decoupling client from server: evolvability, extensibility, adaptability, and composability.
The biggest challenge is that needs change over time, and APIs must necessarily evolve. Versioning is challenging, and breaking changes are inevitable. You'll leave this session with a high-level understanding of these approaches, their respective trade-offs, and ultimately how to align your API approach with your architectural and organizational goals.
This talk will be tailored to Java developers as we delve into the practical applications of AI tools to ease your software development tasks. We'll explore the capabilities of GitHub Copilot used as a plugin for IntelliJ IDEA and VSCode. We'll also play with GPT-4 and examine ways it can help.
It's often said that AI tools will not replace existing developers, but that a developer with those tools will have an advantage over developers without them. Join us as we try to demystify the world of AI for Java developers, equipping you with practical skills to incorporate these tools into your development workflow. Note that this is a rapidly changing field, and the talk will evolve to work with the latest features available.
To use OpenAI services, you need to register for a developer key at https://platform.openai.com.

Install Ollama. The installer is located at https://ollama.com, and is available for macOS, Linux, and Windows.

Download the gemma2 and moondream models. The command to do so is ollama run gemma2, and the same for moondream. You can also use pull instead of run.
This talk will be tailored to Java developers as we delve into the practical applications of AI tools to ease your software development tasks. We'll explore the capabilities of GitHub Copilot used as a plugin for IntelliJ IDEA and VSCode. We'll also play with GPT-4 and examine ways it can help.
It's often said that AI tools will not replace existing developers, but that a developer with those tools will have an advantage over developers without them. Join us as we try to demystify the world of AI for Java developers, equipping you with practical skills to incorporate these tools into your development workflow. Note that this is a rapidly changing field, and the talk will evolve to work with the latest features available.
To use OpenAI services, you need to register for a developer key at https://platform.openai.com.

Install Ollama. The installer is located at https://ollama.com, and is available for macOS, Linux, and Windows.

Download the gemma2 and moondream models. The command to do so is ollama run gemma2, and the same for moondream. You can also use pull instead of run.
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “Detached HEAD state”? Why tag commits when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
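You can see that constitution first-hand in a throwaway repository (a sketch; the identities and messages below are illustrative):

```shell
# Create a scratch repository with two commits.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "first"
git -c user.email=you@example.com -c user.name=You \
    commit -q --allow-empty -m "second"

# A commit is just a small text object: a tree, zero or more parents,
# author/committer lines, and the message.
git cat-file -p HEAD

# History is the chain of parent pointers—a directed acyclic graph.
git log --graph --oneline
```

Once you see that a branch is merely a movable pointer into this graph, recovering a "deleted" branch or explaining a detached HEAD stops being mysterious.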
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, as well as know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “Detached HEAD state”? Why tag commits when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, as well as know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
Embarking on the journey to become an architect requires more than technical expertise; it demands a diverse skill set that combines creativity, leadership, communication, and adaptability. You may be awesome as a developer or engineer, but the skills needed to be an architect are often different and require more than technical awareness to succeed.
This presentation delves into the crucial skills aspiring architects need to cultivate. From mastering design principles and embracing cutting-edge technologies to honing collaboration and project management abilities, attendees will gain valuable insights into the multifaceted world of architectural skills. Join us as we explore practical strategies, real-world examples, and actionable tips that pave the way for aspiring architects to thrive in a dynamic and competitive industry.
In today's rapidly evolving technological world, architects play pivotal roles in shaping the success of organizations. This presentation explores the diverse spectrum of architects, ranging from Enterprise Architects and Solution Architects to UX Architects and Security Architects. Delving into their unique responsibilities and expertise, this session sheds light on how these professionals align business objectives with technology, design innovative solutions, and ensure seamless integration. By understanding the multifaceted roles of architects, attendees will gain valuable insights into how these experts drive efficiency, foster innovation, and architect the future of modern enterprises.
This session will outline the importance of Architectural Roles by diving into different roles and responsibilities including the value that each role brings to larger enterprises. The roles that will be covered include:
In addition, we will take time to discuss common challenges faced by architects.
In the realm of architecture, principles form the bedrock upon which innovative and enduring designs are crafted. This presentation delves into the core architectural principles that guide the creation of structures both functional and aesthetic. Exploring concepts such as balance, proportion, harmony, and sustainability, attendees will gain profound insights into the art and science of architectural design. Through real-world examples and practical applications, this session illuminates the transformative power of adhering to these principles, shaping not only buildings but entire environments. Join us as we unravel the secrets behind architectural mastery and the principles that define architectural brilliance.
Good architectural principles are fundamental guidelines or rules that inform the design and development of software systems, ensuring they are scalable, maintainable, and adaptable. Here are some key architectural principles that are generally considered valuable in software development:
Adhering to these architectural principles can lead to the development of robust, maintainable, and adaptable software systems that meet the needs of users and stakeholders effectively.
In the age of digital transformation, Cloud Architects emerge as architects of the virtual realm, bridging innovation with infrastructure. This presentation offers a comprehensive exploration of the Cloud Architect's pivotal role.
Delving into cloud computing models, architecture design, and best practices, attendees will gain insights into harnessing the power of cloud technologies. From optimizing scalability and ensuring security to enhancing efficiency and reducing costs, this session unravels the strategic decisions and technical expertise that define a Cloud Architect's journey. Join us as we decode the nuances of cloud architecture, illustrating its transformative impact on businesses in the modern era.
In the ever-changing landscape of technology, Solution Architects stand as the linchpin between complex business challenges and innovative technological solutions. This presentation dives deep into the world of Solution Architects, exploring their pivotal role in crafting tailored, efficient, and scalable solutions. From deciphering intricate business requirements to orchestrating seamless integrations, Solution Architects navigate a maze of technologies to deliver outcomes that align perfectly with organizational goals. Join us as we unravel the key responsibilities, skills, and methodologies that empower Solution Architects to transform abstract ideas into tangible, impactful solutions, shaping the future of businesses in a digital age.
In this session we will walk through the following:
Whether you are an existing Solution Architect looking to hone or validate your skills or you are looking to get into the role of Solution Architect, this session is for you!
What if developers had tools that recorded and helped them explore their historical experiences with the code, and they could identify hotspots of team friction, worthy of discussion, based on empirical data? This talk will explore the possibility and impact of such tools through a design fiction and working prototype of an Augmented Reality (AR) Code Planetarium powered by FlowInsight developer tools.
In an Agile software development process, a software team will typically meet on a regular basis in a “retrospective meeting” to reflect on the challenges faced by the team and opportunities for improvement. On the surface, this challenge might seem straightforward, but modern software projects are complex endeavors, and developers are human – identifying what’s most important in a complex sociotechnical system is a task humans struggle to do well.
The organization has grown, and one line of business has become 2 and then 10. Each line of business is driving technology choices based on its own needs. Who manages alignment of technology across the entire enterprise, and how? Enter Enterprise Architecture! We need to stand up a new part of the organization.
This session will define the role of architects and architectures. We will walk through a framework of starting an Enterprise Architecture practice. Discussions will include:
Awareness is the knowledge or perception of a situation or fact, which, depending on a myriad of factors, is an elusive attribute. It is likely the most significant skill never asked for, perhaps because it is challenging to measure or verify. It is challenging even to be aware of awareness, or to show evidence of it. This session will cover different levels of architectural awareness: how to surface awareness, and how you might respond to different technical situations once you are aware.
Within this session we look holistically at engineering, architecture, and the software development process, discussing:
* Awareness of when process needs to change (original purpose of Agile)
* Awareness of architectural complexity
* Awareness of a shift in architectural needs
* Awareness of application portfolio and application categorization
* Awareness of metrics surfacing system challenges
* Awareness of system scale (and what scale means for your application)
* Awareness when architectural rules are changing
* Awareness of motivation for feature requests
* Awareness of solving the right problem
The focus of the session will be mindfulness (defined as focusing on one's awareness), concentrating on sharing strategies for heightening awareness as an architect and engineer.
Looking to go beyond the basics of Go? In my Golang for Java Developers session, we teach the basics of Go while building out different labs to demonstrate an understanding of the concepts.
This session is the next stage of more deeply understanding some of the more advanced or new features in Golang which include:
Looking to go beyond the basics of Go? In my Golang for Java Developers session, we teach the basics of Go while building out different labs to demonstrate an understanding of the concepts.
This session is the next stage of more deeply understanding some of the more advanced or new features in Golang which include:
Software development is an amazing profession, requiring a delicate combination of analytical and creative skills. Understanding architectural patterns, agile best practices, and the depths of platforms, tools, and languages requires deep analytical skills. Yet crafting a system also requires vision and an understanding of when to deviate from traditional best practices.
In this session, we will explore lessons learned over many years of building large software systems. We will challenge traditional assumptions and explore new ways of thinking.
Why talk about resilience when thinking of scale? It turns out all the effort we put in to achieve great performance may be lost if we are not careful with failures. Failure is not only about the unavailability of parts of an application to some users; it may result in overall poor performance for everyone else as well.
In this presentation we will discuss ways to attain scale and discuss how to preserve those efforts by dealing with failures properly.
We have measures for the health of a person, a process plant, an aircraft, and so on. How do we measure the health of our applications from an architectural point of view? Fitness functions are intended to provide a health status and can be useful for keeping an eye on the system as the architecture evolves.
In this presentation we will look at fitness functions, the different types, present examples, and discuss how they may apply to your own applications.
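To give a flavor of what a structural fitness function can look like, here is a hand-rolled sketch with invented class names; libraries such as ArchUnit express such rules far more thoroughly.

```java
import java.lang.reflect.Field;
import java.util.Arrays;

// A hypothetical domain class, present only so the check has something to inspect.
class InvoiceService {
    private String customerName;
    private double amount;
}

public class FitnessFunctionDemo {
    // A structural fitness function: domain classes must not depend on
    // java.sql types. Returns true when the constraint holds.
    static boolean domainIsFreeOfSql(Class<?> domainClass) {
        return Arrays.stream(domainClass.getDeclaredFields())
                .map(Field::getType)
                .noneMatch(t -> t.getName().startsWith("java.sql."));
    }

    public static void main(String[] args) {
        System.out.println(domainIsFreeOfSql(InvoiceService.class));
    }
}
```

Run as part of the build, a check like this turns an architectural intention ("the domain layer stays persistence-free") into an executable, continuously evaluated health measure.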
Design Patterns are common ways to solve problems that developers have discovered over time. They often fill the gaps between the language capabilities and the design goals. When languages mature, sometimes patterns become natural features of languages and blend into the natural way of writing code rather than requiring special effort. Java has evolved significantly over the years.
In this session we will revisit some common design problems and see how patterns are realized to solve those problems with the modern capabilities in Java.
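As a small illustration of the point above (the class and method names are invented for the example), consider how two classic patterns collapse into modern Java features: Strategy becomes a lambda over a functional interface, and Singleton becomes an enum:

```java
import java.util.function.UnaryOperator;

// Hypothetical illustration: two classic GoF patterns expressed with
// modern Java features instead of dedicated class hierarchies.
public class ModernPatterns {

    // Strategy: a lambda or method reference replaces the
    // one-method strategy interface and its concrete subclasses.
    static String applyStrategy(String input, UnaryOperator<String> strategy) {
        return strategy.apply(input);
    }

    // Singleton: an enum gives a thread-safe, serialization-safe
    // single instance with no double-checked locking.
    enum Config {
        INSTANCE;
        String greeting() { return "hello"; }
    }

    public static void main(String[] args) {
        System.out.println(applyStrategy("abc", String::toUpperCase)); // ABC
        System.out.println(Config.INSTANCE.greeting());
    }
}
```

The pattern intent survives; the ceremony disappears into the language.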
Most mainstream languages started out with support for multithreading. Threads were considered lightweight, but that term is relative. Threads were not ideal from the point of view of resource utilization, and they often led to higher deployment costs. There has been a greater emphasis on asynchronous programming in recent times, due to the nature of applications and the architectural patterns they tend to favor.
In this presentation we will discuss how this shift is transforming programming languages, their ecosystems, and how we develop applications.
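A minimal sketch of that shift, assuming Java 21 or later: virtual threads let plainly blocking code scale like asynchronous code, without callbacks or reactive types (the task count and sleep duration below are arbitrary):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch assuming Java 21+: each submitted task gets its own cheap
// virtual thread, so blocking calls no longer tie up OS threads.
public class VirtualThreadsDemo {
    public static int runTasks(int count) throws Exception {
        List<Future<Integer>> futures = new ArrayList<>();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                final int n = i;
                futures.add(executor.submit(() -> {
                    Thread.sleep(10); // blocks the virtual thread, not an OS thread
                    return n;
                }));
            }
        } // close() waits for all submitted tasks to finish
        int sum = 0;
        for (Future<Integer> f : futures) {
            sum += f.get();
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks(100)); // sum of 0..99 = 4950
    }
}
```

The style is ordinary sequential code; the scalability comes from the runtime rather than from restructuring the program around callbacks.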
Application Programming Interfaces (APIs) are, by definition, directed at software developers. They should, therefore, strive to be useful and easy to use for developers. However, when engaging design elements from the Web, they can be useful in much larger ways than simply serializing state as JSON.
There is no right or perfect API design. There are, however, elements and choices that induce certain properties. This workshop will walk you through various approaches to help you find the developer experience and long-term strategies that work for you, your customers and your organization.
We will cover:
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers, we need to know when to operate in a certain context and when to combine or tease apart how these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to distinguish between them and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
In the ever-evolving landscape of technology and Generative AI, integrating DevOps principles into the machine learning (ML) lifecycle is a transformative game-changer.
Join me for an insightful session where we will explore essential aspects such as MLflow, deployment patterns, and monitoring techniques for ML models. Gain a deeper understanding of how to effectively navigate the complexities of deploying ML models into production environments. Discover best practices and proven strategies for monitoring and observing ML models in real-world scenarios.
By attending this session, you will acquire valuable insights and practical knowledge to overcome the unique hurdles of scaling and bringing AI into production. Unlock the full potential of your ML models by embracing the powerful integration of DevOps principles. This presentation is based on the extensive customer research I conducted to write the best-selling book Scaling Machine Learning with Spark (https://www.amazon.com/Scaling-Machine-Learning-Spark-Distributed/dp/1098106822).
In this example-driven session, we'll review several tips and tricks to make the most out of your Spring development experience. You'll see how to apply the best features of Spring and Spring Boot, including the latest and greatest features of Spring Framework 6 and Spring Boot 3.
Spring has been the de facto standard framework for Java development for nearly two decades. Over the years, Spring has continued to evolve and adapt to meet the ever-changing requirements of software development. And for nearly half that time, Spring Boot has carried Spring forward, capturing some of the best Spring patterns as auto-configuration.
As with any framework or language that has this much history and power, there are just as many ways to get it right as there are to get it wrong. How do you know that you are applying Spring in the best way in your application?
You'll need…
In this example-driven session, we're going to look at how to implement GraphQL in Spring. You'll learn how Spring for GraphQL builds upon GraphQL Java, recognize the use-cases that are best suited for GraphQL, and how to build a GraphQL API in Spring.
Typical REST APIs deal in resources. This is fine for many use cases, but it tends to be more rigid and less efficient in others.
For example, in a shopping API, it's important to weigh how much or how little information should be provided in a request for an order resource. Should the order resource contain only order specifics, with no details about the order's line items or the products in those line items? If all relevant details are included in the response, then it breaks the boundaries of what the resource should offer and is overkill for clients that do not need them. On the other hand, proper factoring of the resource requires that the client make multiple requests to the API to fetch all of the relevant information they may need.
GraphQL offers a more flexible alternative to REST, setting aside the resource-oriented model and focusing more on what a client needs. Much as how SQL allows for data from multiple tables to be selected and joined in response to a query, GraphQL offers API clients the possibility of tailoring the response to provide all of the information needed and nothing that they do not need.
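To make that idea concrete, a hypothetical client could ask for exactly the order fields it needs in a single request (the schema and field names below are invented for the example):

```graphql
# Hypothetical query: the client names exactly the fields it needs, nothing more
query OrderSummary {
  order(id: "1001") {
    placedAt
    lineItems {
      quantity
      product {
        name
        price
      }
    }
  }
}
```

A different client with different needs issues a different query against the same API, with no new endpoint required.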
Introducing Spring Modulith
Although microservices are still a useful architectural choice, the balance between the additional complexity and the advantages of a microservice architecture does not necessarily work out in the benefit of all applications. While most applications will benefit from improved modularity, the challenges that come with distributed computing may be too much for some applications to take on. A well-structured and modular monolithic application might be a better fit.
In this session, we'll explore Spring Modulith, a relatively new Spring library that enables developers to build well-structured Spring Boot applications, guiding them in discovering domain-driven modules, and verifying that the modular arrangement is correct. We'll also see how Spring Modulith assists with modular integration testing and documentation.
Programmers need to perform tasks outside of their development environments. Unfortunately, it seems like many are unaware of the superpowers that command-line tools can afford them with a bit of effort to learn. This workshop/dojo will be a general survey of the classics as well as newer replacements that will make your life easier once you adopt them into your tool belt.
We will cover tools to help with pattern matching, finding things, processing text, working with remote systems, and much more.
Agile has become an overused and overloaded buzzword, so let's go back to first principles. Agile is the 12 principles. Agile is founded on fast feedback and embraces change. Agile is about making the right decisions at the right time while constantly learning and growing.
Architecture, on the other hand, seems to be the opposite. Once famously described by Grady Booch as “the stuff that's hard to change” there is overwhelming pressure to get architecture “right” early on as the ultimate necessary rework will be costly at best, and fatal at worst. But too much complexity, too early, can be just as costly or fatal. A truly practical approach to agile architecture is long overdue.
This session introduces a new approach to architecture that enables true agility and unprecedented evolvability in the architectures we design and build. Whether you are already a seasoned architect or are just beginning that path, this session will fundamentally change the way you think about and approach software architecture.
As a PhD student, Arty took on a directed study to learn how to bring her 3D animated character, Fervie, into an interactive context – to run and jump around in an Augmented Reality (AR) environment with a game controller. Join her as she shares her journey in learning, the challenges faced, lessons learned, and a demo of her final project, Learning with Fervie.
If you've ever been interested in building applications for AR/VR space, or working at the intersection of art + coding, Arty will be sharing her journey in learning this space over the last few years. With lots of fun examples, and a proposal for a standard platform for building 3D apps for software engineering, you'll learn about the capabilities of what's possible in this space with new inspiring ideas for building fresh and innovative applications.
Event-driven architectures are not new, but they are newly ascendant. For the first time since the client-server revolution of 40 years ago, a new architectural paradigm is changing the way we build systems. Apache Kafka and microservices are at the center of this movement.
In this workshop, we’ll discuss the issues that arise turning a monolith into a set of reactive services, including issues like data contracts, integrating with the systems you can't change, handling request-response interfaces, and more. We'll also discuss common infrastructure choices like Apache Flink and Apache Pinot. Hands-on exercises will focus on understanding your organization's data and forming a plan to refactor that monolith that seems like it will never go away.
I'm looking forward to having you in the Event-Driven Architecture Workshop! To get the best use of our time together, the in-person exercises will focus on understanding the systems you're currently using at work and planning their refactoring to microservices. You should bring a laptop, a pen, and energy for the day!
“By 2030, 80 percent of heritage financial services firms will go out of business, become commoditized, or exist only formally but not competing effectively”, predicts Gartner.
This session explores the integration of AI, specifically ChatGPT, into cloud adoption frameworks to modernize legacy systems. Learn how to leverage AWS Cloud Adoption Framework (CAF) 3.0, Microsoft Cloud Adoption Framework for Azure, and Google Cloud Adoption Framework to build cloud-native architectures that maximize scalability, flexibility, and security. Designed for architects, technical leads, and senior IT professionals, this talk provides actionable insights and strategies for successful digital transformation.
Cloud adoption frameworks are essential for accelerating digital business transformation by leveraging the power of cloud technologies. This talk will guide you through the AWS Cloud Adoption Framework (CAF) 3.0, Microsoft Cloud Adoption Framework for Azure, and Google Cloud Adoption Framework, focusing on building cloud-native architectures that ensure scalability, flexibility, and security.
The session will delve into the strategic role of AI, particularly ChatGPT, in modernizing legacy systems. By understanding and implementing these frameworks, you will learn to navigate the complexities of transitioning from legacy systems to modern cloud-based architectures. This talk will provide practical steps and real-world case studies to help you effectively plan and execute your cloud adoption strategy.
Legacy systems can be assets and obstacles, providing reliable functionality but often becoming burdensome to maintain and evolve. In this talk, we will confront the challenges of working with legacy architectures and discover the strategic approaches for modernization. By examining the benefits and risks of incremental migration versus full system rewrites, attendees will learn the most suitable path for their unique situations.
Through practical examples and case studies, we will explore how successful organizations have revitalized their aging architectures, preserving the value of legacy investments while embracing innovation and adaptability. From small-scale legacy components to large-scale monolithic systems, we'll cover diverse modernization scenarios, allowing participants to glean insights applicable to their projects.
Whether your organization is facing budget constraints, a need for rapid modernization, or concerns about maintaining critical functionality, this talk offers a comprehensive guide to navigating the legacy landscape and crafting a roadmap to rejuvenate aging architectures.
Participants will leave this session equipped with a robust understanding of how to leverage AI, particularly ChatGPT, in the context of legacy system modernization. You will gain strategic insights, practical tools, and actionable knowledge to lead your teams and projects towards successful, AI-enhanced modernization efforts, ensuring your organization remains competitive and agile in a rapidly evolving digital landscape.
This is a dynamic session exploring the integration of cutting-edge AI technologies into software architecture. This talk provides senior developers and architects with actionable insights on leveraging large language models like ChatGPT to enhance design processes, manage architectural tradeoffs, and achieve scalable, innovative solutions.
Overview of the session
Importance of large language models (LLMs) in software architecture
Introduction to ChatGPT and its relevance for software architects
Part 1: The Role of Large Language Models in Software Architecture
Understanding the capabilities of LLMs like ChatGPT
Benefits of integrating LLMs in modern software development
Real-world examples of AI-enhanced software architecture
Part 2: Prompt Engineering for Architectural Tasks
Crafting effective prompts for ChatGPT
Strategies for creating precise and effective prompts
Examples of architectural prompts and their impact
Interactive Exercise: Participants craft and test their own prompts
Feedback and discussion on prompt effectiveness
Part 3: Optimizing Requirement Analysis with ChatGPT
Leveraging ChatGPT for requirement analysis and design
Integration of AI in empathizing with client needs and journey mapping
Cost estimations, compliance, security, and performance
Case Study: Using empathy map and customer journey map tools in conjunction with AI
Hands-On Exercise: Requirement analysis and design
Part 4: Managing Architectural Tradeoffs
Defining and understanding architectural tradeoffs
Exploring real-world tradeoff scenarios
Case Study 1: Scalability vs. Flexibility
Case Study 2: Time-to-Market vs. Maintainability
Leveraging AI insights to analyze tradeoffs
Group Discussion and Q&A
Part 5: Best Practices for Integrating AI in Software Architecture
Techniques for gathering and prioritizing project requirements
Aligning architectural decisions with business objectives
Evaluating risks and potential outcomes of tradeoffs
Assessing tools, technologies, and architectural patterns
AI-powered decision support with ChatGPT
Collaborative decision-making and involving stakeholders
Part 6: Achieving Sustainable Innovation
Leveraging tradeoffs to drive innovation and creativity
Recap of key points and takeaways
Panel Discussion with Industry Experts
AI in architectural innovation: ChatGPT in action
Q&A and Open Discussion with the Audience
Conclusion
Recapitulation of key takeaways
Addressing final questions and facilitating discussions with the audience
Highlighting the future of AI and big data with technologies like ChatGPT
In this dynamic talk, we explore the fusion of AI, particularly ChatGPT, with data-intensive architectures. The discussion covers the enhancement of big data processing and storage, the integration of AI in distributed data systems like Hadoop and Spark, and the impact of AI on data privacy and security. Emphasizing AI's role in optimizing big data pipelines, the talk includes real-world case studies, culminating in a forward-looking Q&A session on the future of AI in big data.
This talk delves into the innovative integration of advanced AI models like ChatGPT into data-intensive architectures. It begins with an introduction to the significance of big data in modern business and the role of AI in scaling data solutions. The talk then discusses the challenges and strategies in architecting big data processing and storage systems, highlighting how AI models can enhance data processing efficiency.
A significant portion of the talk is dedicated to exploring distributed data systems and frameworks, such as Apache Hadoop and Spark, and how ChatGPT can be utilized within these frameworks for improved parallel data processing and analysis. The discussion also covers the critical aspects of data privacy and security in big data architectures, especially considering the implications of integrating AI technologies like ChatGPT.
The talk further delves into best practices for managing and optimizing big data pipelines, emphasizing the role of AI in automating data workflow, managing data lineage, and optimizing data partitioning techniques. Real-world case studies are presented to illustrate the successful implementation of AI-enhanced data-intensive architectures in various industries.
Introduction (10 mins)
Part 1: Architecting for Big Data Processing and Storage (25 mins)
Part 2: Distributed Data Systems and Frameworks (25 mins)
Part 3: Handling Data Privacy and Security in Big Data Architectures (20 mins)
Part 4: Best Practices for Managing and Optimizing Big Data Pipelines (20 mins)
Case Studies and Real-World Applications (10 mins)
Conclusion and Q&A (10 mins)
Overall, this talk aims to provide a comprehensive understanding of how AI, especially ChatGPT, can be integrated into data-intensive architectures to enhance big data processing, analysis, and management, preparing attendees to harness AI's potential in their big data endeavors.
Key Takeaways:
When the world wide web launched in 1993, it presented a revolutionary new way to globally share information. The revolution didn't stop there. The web soon became a platform for building, hosting, and distributing entire applications. Today most applications are built as web applications, yet the core capabilities of HTML remain mired in the Web 1.0 days. Ajax was the first of many “hacks” used to build web applications that delivered a rich, responsive user experience rivaling traditional fat-client applications. Early JS libraries and frameworks overcame browser incompatibilities and provided the first abstractions to hide the hacks, and today's frameworks are so powerful that conventional wisdom states they are the de facto best practice for building modern web applications. But at what cost?
We've gone full-circle. Today's SPAs have more in common with the fat client applications of the 90s (albeit with simplified deployment) than they do with the web. The modern UX of today's framework-driven SPAs is what users demand, thus we follow the ever-changing trends; but at what cost? Beyond the bloat, complexity, and ephemerality of the modern webdev toolchain; modern webdev practices have inadvertently abandoned the core ideas of the web that made the platform technologically, architecturally, and philosophically revolutionary.
Leading thinkers in the web development space have long proclaimed that “not everything should be a SPA” however the alternative of a web 1.0 vanilla html application has very limited utility in the year 2024. Are these our only options, or does a “third way” exist?
This session introduces that “third way” based on the revolutionary ideas that empowered the web. A meaningful, practical, and proven alternative to SPA frameworks providing a simpler and more lightweight approach to building applications on the Web and beyond without sacrificing the UX.
Web applications built following this “third way” boast more evolvability, longevity, and simplicity. SPAs will continue to have their place, but good software engineering is about using the right tool for the job. After attending this session, you will have more than just a hammer in your toolbox.
In 2017, an organization known as The Semantic Arts published their “data-centric manifesto” leading with this paragraph.
> “We have uncovered a root cause of the messy state of Information Architecture in large institutions and on the web today. It is the prevailing application-centric mindset that gives applications priority over data. The remedy is to flip this on its head. Data is the center of the universe; applications are ephemeral.”
While the vision and ideas of this manifesto are compelling, implementation details are scarce, leaving its potential out of reach of many busy developers and architects.
Data-centric is a major departure from the current application-centric approach to systems development and management. Migration to the data-centric approach will not happen by itself.
This session is full of practical and actionable examples, insights, and approaches to this new paradigm. If you’re ready to consider the possibility that systems could be more than an order of magnitude cheaper and more flexible, then check out this session to see firsthand a new way to think about software and information systems.
Among the many lessons the pandemic has taught us, one is that it is possible for many of us to collaborate as a remote or distributed team without jeopardizing the quality of our work. Every day, new tools are emerging in an attempt to address the unique challenges of being remote.
In this session, we will take a look at some of the best tools and strategies for improving communications with your remote team to ensure that you are keeping projects on schedule and maintaining healthy boundaries for work and life.
Over the past few years, the basic idioms and recommended programming styles for Java development have changed. Functional features are now favored, using streams, lambda expressions, and method references. The new six-month release schedule provides the language with new features, like modules and local variable type inference, much more frequently. Even the new license changes in the language seem to complicate installation, usage, and especially deployment.
The purpose of this training course is to help you adapt to the new ways of coding in Java. The latest functional approaches are included, including using parallel streams for concurrency and when to expect them to be useful. All the significant new features added to the language will be reviewed and evaluated, with the goal of understanding what problems they were designed to handle and when they can be used effectively in your code.
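For a taste of the functional style the course covers, here is a small self-contained sketch contrasting a sequential and a parallel stream (the computation itself is invented for illustration; parallel streams pay off for CPU-bound work on large data sets, not tiny ones like this):

```java
import java.util.stream.IntStream;

// Sketch: the same functional pipeline, run sequentially and in parallel.
public class StreamStyles {
    static long sumOfSquares(int n) {
        return IntStream.rangeClosed(1, n)
                .mapToLong(i -> (long) i * i)
                .sum();
    }

    static long sumOfSquaresParallel(int n) {
        return IntStream.rangeClosed(1, n)
                .parallel() // splits the work across the common fork-join pool
                .mapToLong(i -> (long) i * i)
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(10));         // 385
        System.out.println(sumOfSquaresParallel(10)); // 385
    }
}
```

Because the pipeline is free of shared mutable state, switching between sequential and parallel execution is a one-call change.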
The workshop will use Java 21. You can get that from any major vendor, including Oracle. If you don't have a preferred vendor, then https://adoptium.net/ offers pre-built OpenJDK binaries for free.
We'll use IntelliJ IDEA for coding, but nothing in the materials requires any particular IDE. Only the Community edition is necessary, though the instructor will be using the Ultimate edition.
We will also use Gradle as our build tool, but most of the major IDEs can create Gradle-based Java projects without additional installs. You are welcome to use Maven if you prefer, but the instructor may not be able to help if you run into issues.
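For reference, a minimal Gradle build for the setup described above might look like the following. This is a sketch, not the official workshop build; the JUnit dependency and version are only examples:

```groovy
// Hypothetical minimal build.gradle for a Java 21 project
plugins {
    id 'java'
}

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(21)
    }
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter:5.10.2'
}

test {
    useJUnitPlatform()
}
```

The toolchain block lets Gradle locate (or provision) a matching JDK even if your default JVM is a different version.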
In the rapidly evolving realm of API development, having the right tools is paramount for both efficient design and effective management. This presentation delves into a curated collection of indispensable tools that every API practitioner should be familiar with. Covering the full spectrum of the API Lifecycle we will talk about tools for design, development, consistency, governance, documentation, and team collaboration.
Join us to discover how this essential toolkit can empower your API journey, enhancing productivity and ensuring optimal performance throughout the API lifecycle.
In an era where digital transformations drive business value, the importance of developing consistent, scalable, and robust APIs cannot be overstated. As teams expand and projects multiply, maintaining a unified API design can pose significant challenges.
This presentation delves into the powerful combination of linting and reusable models as tools to navigate these challenges and ensure consistency across large-scale API designs. We will explore API linting using the open-source Spectral project to enable teams to identify and rectify inconsistencies during design. In tandem, we will examine the need for reusable models, recognizing that the best specification is the one you don't have to write or lint at all! These two approaches not only facilitate the smooth integration of services but also foster collaboration across teams by providing a shared, consistent foundation.
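To make the linting idea concrete, here is a minimal, hypothetical Spectral ruleset of the kind the session discusses. It extends Spectral's built-in OpenAPI rules and adds one custom naming rule; the rule name and regex are invented for illustration:

```yaml
# Hypothetical .spectral.yaml ruleset
extends: spectral:oas
rules:
  # Raise a built-in rule from warning to error
  operation-description: error

  # Custom rule: enforce kebab-case URL paths
  paths-kebab-case:
    description: Paths must be kebab-case.
    message: "{{property}} should be kebab-case."
    severity: warn
    given: $.paths[*]~
    then:
      function: pattern
      functionOptions:
        match: '^(\/[a-z0-9-]+|\/\{[a-zA-Z]+\})+$'
```

Checked in alongside the API specifications, a ruleset like this lets every team get the same consistency feedback at design time rather than in review.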
GitHub needs no introduction as the world's premier source code repository. However, over the past several years GitHub has transformed well beyond a great tool for managing source code. It now provides a compelling one-stop-shop of capabilities as part of its platform that enables you to cut loose your disparate jungle of other tooling. Being aware of and learning how to effectively use this Swiss Army Knife of GitHub capabilities can substantially reduce your overall development costs while also reducing your team's cognitive overhead.
Join us for an exciting session where we dive deep into the GitHub toolchain, designed to supercharge developer productivity and unite your teams around a powerful engineering platform. Discover how to optimize pull request lifecycles with protected branch configurations, organizational rulesets, and merge queues. We'll also delve into security vulnerability detection using Dependabot and GitHub Advanced Security Code Scanning workflows that developers will love. Don't miss this opportunity to transform your development process and take your GitHub skills to the next level!
Sharing code and internal libraries across your distributed microservice ecosystem feels like a recipe for disaster! After all, you have always been told and likely witnessed how this type of coupling can add a lot of friction to a world that is built for high velocity. But I'm also willing to bet you have experienced the opposite side effects of dealing with dozens of services that have had the same chunks of code copied and pasted over and over again, and now you need to make a standardized, simple header change to all services across your platform; talk about tedious, frictional, error-prone work that you probably will not do! Using a variety of code-sharing processes and techniques like inner sourcing, module design, automated updates, and service templates, reusing code in your organization can be built as an asset rather than a liability.
In this talk, we will explore the architectural myth in microservices that you should NEVER share any code and explore the dos and don'ts of the types of reuse that you want to achieve through appropriate coupling. We will examine effective reuse patterns, including what a Service Template architecture looks like, while also spending time on the lifecycle of shared code and practically rolling it out to your services. We will finish it off with some considerations and struggles you are likely to run into introducing code reuse patterns into the enterprise.
Domain-Driven Design has been one of the major cornerstones of large system design for many years. It has been in the zeitgeist as of late, especially when it comes to the terms bounded context and microservices. This full-day class introduces you to Domain-Driven Design and why it is important. We will cover the design and patterns, and discuss subdomains, context mapping, tools, and management.
Our workshop will not only introduce you to the terms, but we will also cover planning and work through design challenges so we can discuss the tradeoffs that come with the design. We will also cover some of the more modern patterns that came out of DDD, like CQRS, Data Mesh, REST, and more.
We spend far more time reading code than we do writing it, yet we spend far more time and money learning how to write it better. We are all familiar with the feeling of looking at a piece of code and having no idea what it does. Trying to debug it is a real chore! What if you could confidently approach unfamiliar code knowing that you can find what you need in short order?
This session will show you some tools and techniques to be able to read and comprehend unfamiliar code more quickly. We will also go over some debugging techniques to help find bugs more quickly in unfamiliar code.
When trying to track down a bug, the first thing we usually do is jump into the debugger and settle in for a long, tedious slog stepping through the code. This session will discuss some great new techniques and tools you can use to streamline your debugging sessions.
We will be borrowing liberally from the new book “Troubleshooting Java” by Laurențiu Spilcă.
Refactoring is an essential skill for working with legacy code. Knowing code smells and how to correct them through refactoring is essential for maintaining an existing code base. But what are we refactoring to? We don't want to refactor just for the sake of it. This is where design patterns really shine. When we refactor to a proper design pattern, we are putting together a real solution to the problem rather than just moving code around to make it a little more readable.
This session will show you how to use your full developer toolbox so you can go from code smell to refactoring recipe to design pattern to solution.
Since 1994, the original Gang of Four book, “Design Patterns: Elements of Reusable Object-Oriented Software”, has helped developers recognize common patterns in development. The book's examples were originally written in C++, but there have been books that translate the original design patterns into other languages. One feature of the Gang of Four patterns that has particularly stuck with me is testability: with the exception of Singleton, all of the patterns are unit-testable. Design patterns are also our common developer language. When a developer says, “Let's use the Decorator pattern,” we know what is meant.
What's new, though, is functional programming, so we will also discuss how these patterns change in the modern functional programming world. For example, functional currying in place of the builder pattern, using an enum for a singleton, and reconstructing the state pattern using sealed interfaces. We will cover so much more, and I think you will be excited about this topic and putting it into practice on your codebase.
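As a small taste of the kind of recasting we will explore, here is a sketch of two of those ideas: a singleton as an enum, the state pattern as a sealed interface, and currying standing in for a builder. All class and method names here are illustrative, not from the session materials.

```java
import java.util.function.Function;

public class ModernPatterns {
    // Singleton as an enum: serialization- and reflection-safe by construction.
    enum Config { INSTANCE; final String env = "dev"; }

    // State pattern via a sealed interface, records, and pattern matching (Java 21).
    sealed interface OrderState permits NewOrder, Shipped {}
    record NewOrder() implements OrderState {}
    record Shipped(String trackingId) implements OrderState {}

    static String describe(OrderState s) {
        // The compiler checks exhaustiveness over the sealed hierarchy.
        return switch (s) {
            case NewOrder n -> "awaiting shipment";
            case Shipped sh -> "shipped: " + sh.trackingId();
        };
    }

    record User(String name, String email) {}
    // Currying in place of a builder: each application fixes one field.
    static final Function<String, Function<String, User>> user =
        name -> email -> new User(name, email);

    public static void main(String[] args) {
        System.out.println(describe(new Shipped("XYZ-1")));
        System.out.println(user.apply("Ada").apply("ada@example.com"));
    }
}
```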
Setup Requirements:
If you do not have the following, or don't want to use it, we can use GitHub Codespaces or Gitpod.io; both options provide a VS Code instance online.
There is a new way of threading, which means it is time to prepare. Project Loom introduced Java Virtual Threads, which are now available in Java 21. Virtual threads are lightweight threads meant to perform quick operations without the need to procure long-running OS threads, which can prove expensive. In this presentation, we will learn how to use these threads, what they mean for the rest of the Java API, and what they mean for third-party libraries.
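As a preview, a minimal sketch of the virtual-thread API (requires Java 21; the class name and tasks are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        // Start a single virtual thread directly.
        Thread vt = Thread.startVirtualThread(() ->
                System.out.println("virtual? " + Thread.currentThread().isVirtual()));
        vt.join();

        // Or give every quick task its own cheap virtual thread.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                exec.submit(() -> id * 2); // a quick operation
            }
        } // close() waits for all submitted tasks to finish
    }
}
```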
Future and Reactive
Threading has always been tough. Even with new frameworks that can make it easy, sometimes we don't have them at our disposal. This full-day session focuses on threading and the various synchronizers in Java. We will have material you can use as a reference and challenges that will help you remember some pitfalls to avoid.
volatile
Phaser
CountDownLatch
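As a preview of one of those synchronizers, here is a small CountDownLatch sketch (the class and method names beyond the JDK's are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    // Launch n worker threads and block until every one has counted down.
    static long runWorkers(int n) {
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                // ... do some real work here ...
                done.countDown();       // signal this worker is finished
            }).start();
        }
        try {
            done.await();               // blocks until the count reaches zero
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.getCount();         // 0 once all workers have finished
    }

    public static void main(String[] args) {
        System.out.println("remaining count: " + runWorkers(3));
    }
}
```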
In this presentation, we will discuss Kafka Connect, an open-source project from Confluent. Kafka Connect provides a way to move data from a data store acting as a source and stream or batch that information into Kafka. It also gives us a way to take information from Kafka and send it to another data store, a sink. Sources and sinks let us connect Kafka to and from various databases and message queues.
What this presentation will entail:
At the end of this presentation, we will have a live demonstration of a working data pipeline between data stores.
Kafka is more than just a messaging queue with storage. It goes beyond that: with technology from Confluent open source, it has become a full-fledged data ETL and data-streaming ecosystem.
When we utter the word Kafka, it is no longer just one component; it can be an entire data-pipeline ecosystem that transforms and enriches data from source to sink. It offers different ways to handle that data as well. In this presentation, we define:
We then discuss ksqlDB, a SQL layer built on Kafka Streams that provides a simple query language for performing streaming operations.
In this session, we will discuss architectural concerns regarding security. How do microservices communicate with one another securely? What are some of the checklist items that you need?
Join us for an in-depth exploration of cutting-edge messaging styles in your large domain.
Here, we will discuss the messaging styles you can use in your business.
Join us for a transformative hands-on workshop on Personal Knowledge Management (PKM), designed specifically to empower developers, architects, and knowledge workers alike to master information in this information age. Based on Tiago Forte's Building a Second Brain methodology and implemented using the Logseq PKM application, this course aims to equip attendees with the strategies, tools, and insights to streamline their knowledge management, increase productivity, and stimulate creativity. Attendees will learn to construct a personal knowledge graph, effectively annotate and reference digital assets, manage tasks, journal for success, leverage templates, and much more. The ultimate goal is to create a personalized system that enables you to instantly find or recall everything you know and learn.
Throughout this full-day, hands-on workshop, you will be guided to apply the concepts and practices learned to build your own personal knowledge graph. By the end of the session, you will have a comprehensive system to manage your knowledge effectively, enabling you to spend less time searching for notes or lost information and more time utilizing what you know and learn.
This workshop isn't just about learning new concepts or tools; it's about transforming your relationship with information and your productivity. The skills and practices you will learn are universally applicable, irrespective of the tools you use. We will show you how these methods work in Logseq, but the principles can be adapted to other platforms as well.
Join us for this transformative journey, and experience a significant shift in how you manage and utilize your knowledge, leading to increased productivity, creativity, and overall well-being in your personal and professional life.
Bring your curiosity, your questions, and your goals. We look forward to seeing you at the workshop!
REST is undoubtedly one of the most maligned and misunderstood terms in our industry today. So many different things have been called REST that the word has virtually lost all meaning. Many systems and applications that self-describe as “RESTful” usually are not, at least according to REST as defined in Dr. Roy T. Fielding's 2000 dissertation, “Architectural Styles and the Design of Network-based Software Architectures”.
The wild success of the architecture derived by Dr. Fielding led many to want to emulate it (even when it was inappropriate to do so). As a shorthand, organizations began referring to “RESTful” systems, which exposed “RESTful” APIs. Over time “REST” became a buzzword referring to a vague generalization of HTTP/json APIs that typically bear little to no resemblance to the central ideas of REST (and thus elicit few of the benefits). Hypermedia is the central pillar and defining characteristic of the REST architectural style yet it remains almost universally absent.
Hypermedia was a revolutionary idea that, while more relevant than ever, is almost forgotten in today's tech space. Consequently, few reap the benefits of this idea, and even fewer know what they might be giving up.
Although not every system needs to be (or should be) RESTful, it's helpful to understand the key, and often overlooked, ideas in order to decide whether they make sense for your current or next project. This session introduces the key foundational ideas and shows what they look like in practice. Although hypermedia and REST don't make sense for every project or system, you'll leave this session with a better understanding of these groundbreaking ideas, practical insights on how to adopt them today, and ultimately armed to approach the trade-offs of this approach mindfully and deliberately.
The concept of an API is straightforward enough, but the process of turning the individual endpoints into a collection of value-adding organizational resources is not something that gets a lot of attention. In this talk, we will discuss the various individual, team, and organizational choices that impact the development, planning, testing, standardization, operationalization, and evolution of consistent and compatible APIs.
We will cover API:
In this session, we are going to look at 3 different techniques that are remarkably powerful in combination to cut through legacy code without having to go through the bother of reading or understanding it.
The techniques are:
Combination Testing: to get 100% test coverage quickly
Code Coverage as guidance: to help us make decisions about inputs and deletion
Provable Refactorings: to help us change code without having to worry about it
In combination, these three techniques can quickly make impossible tasks trivial.
We will be doing this on the Gilded Rose Kata (https://github.com/emilybache/GildedR…. It is extra beneficial if you try it out yourself first so you can see how your implementation would differ, but this is not required.
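To give a flavor of the first technique, here is a sketch of combination testing against a stand-in legacy function. The function itself is hypothetical (not from the kata); the point is running every combination of interesting inputs and snapshotting the results so any future diff reveals a behavior change.

```java
public class CombinationTests {
    // Stand-in for a legacy function we don't want to read (hypothetical).
    static int legacyPrice(int quality, int sellIn) {
        return sellIn < 0 ? quality * 2 : quality;
    }

    // Combination testing: exercise every combination of boundary inputs
    // and record the outputs as one snapshot string.
    public static String snapshot() {
        StringBuilder sb = new StringBuilder();
        for (int q : new int[]{0, 1, 50})
            for (int s : new int[]{-1, 0, 10})
                sb.append(q).append(',').append(s).append(" -> ")
                  .append(legacyPrice(q, s)).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(snapshot());
    }
}
```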
The system crashes in a portion of the codebase you have never seen. It is Friday night, 4 o'clock pm, and you have to fix it before you can go home. How can you accelerate your understanding of the bug and still get out of the office before 5?
Learn to use two simple techniques to isolate the problem by dividing and conquering code without necessarily understanding it. Once the problem has been isolated, you will have a suite of tests that replicates the error and allows you to simply debug to find the problem. After the problem is found, and with tests replicating the error in hand, you will have everything you need to fix it.
“The times they are a-changin'”
The programmers of 2024/2025 will look nothing like the programmers of the past. As this transition progresses, I have been actively working to understand what the new world will look like. We're not just coding; we're revolutionizing how we code with AI. Imagine a world where Test-Driven Development (TDD) meets the incredible power of AI tools like ChatGPT and Copilot. That's what we're exploring together!
What's Cooking?
In this session, we'll dive deep into how TDD is not just surviving but thriving with the AI evolution. It's not only about the code we write; it's about how we write it, test it, and evolve it with AI's help. I'll share my experiences, the cool tricks I've learned, and how our coding skills transform when AI joins the party.
Live Coding Adventure:
Get ready for an action-packed live coding experience! We'll kick off projects, refactor existing ones, and, most importantly, have fun while learning. I'll take you through the art of prompt engineering for refactoring and how to structure tests (and comments) to support the best generation of code.
Flexibility and Relevance:
The tech world moves fast, and so do we. The content of this session is as dynamic as the field of AI itself, so it might be different come July. I'll stay on top of the latest trends and tools, so what you get is fresh, relevant, and ready to apply.
Learning Outcomes
New Code
Using AI + Tests to generate code
Using AI to generate useful examples
Using AI to understand existing code
Refactoring
Creating full test coverage
Renaming variables and methods
Splitting long methods
Access to an LLM; ChatGPT is preferred, but any will be fine.
An IDE; again, any will work, but Python is preferred.
When things get a little bit cheaper, we buy a little bit more of them. When things get cheaper by several orders of magnitude, you don't just see changes in the margins, but fundamental transformations in entire ecosystems. Apache Pinot is a driver of this kind of transformation in the world of real-time analytics.
Pinot is a real-time, distributed, user-facing analytics database. Its rich set of indexing strategies makes it a perfect fit for running highly concurrent queries on multi-dimensional data, often with millisecond latency. It has out-of-the-box integration with Apache Kafka, S3, Presto, HDFS, and more. And it's so much faster on typical analytics workloads that it is not just a marginally better data warehouse, but the cornerstone of the next revolution in analytics: systems that expose data not just to internal decision makers, but to customers using the system itself. Pinot helps expand the definition of a “decision-maker” not just down the org chart, but out of the organization to everyone who uses the system.
In this talk, you'll learn how Pinot is put together and why it performs the way it does. You'll leave knowing its architecture, how to query it, and why it's a critical infrastructure component in the modern data stack. This is a technology you're likely to need soon, so come to this talk for a jumpstart.
I've witnessed firsthand the challenges and opportunities faced by companies navigating the complex world of legacy systems.
In this session, I'll draw on my years of experience to:
Define the “legacy landscape”: We'll explore the different types of legacy systems, their impact on businesses, and the unique challenges they present for technology teams. We will also distinguish between legacy and technical debt. We will show practical and actionable tools to extract legacy ‘reasoning’ and ‘design’ and transform them into modern landscapes.
Understand the business context: We'll shift perspectives, examining how legacy systems support core business functions and value propositions. Extracting out of a legacy system is essential. And the core capabilities it supports even more so.
Bridge the gap: Technology delivery strategies: We'll delve into practical strategies for delivering technology value within a legacy environment, including:
Modernization techniques: Refactoring, microservices, and API integration.
Legacy coexistence strategies: Leveraging existing investments while adopting new technologies.
Change management considerations: Aligning stakeholders, navigating risk, and ensuring adoption.
Real-world case studies: Learn from real-world examples of companies successfully delivering technology value in legacy environments.
Open discussion: Share your own challenges and experiences, and engage in a collaborative discussion about navigating the legacy landscape.
As CEO of Iasa, the world's largest professional association for architects, and champion of the BTABoK (Business Technology Architecture Body of Knowledge), I've witnessed firsthand the dynamic interplay between engineering and architecture in today's complex projects. In this session, we'll dance on the edge of these disciplines, using the BTABoK as our guiding framework, exploring how agile practices, decisive engineering, efficient delivery, and the delicate balance of hands-on/hands-off leadership intertwine.
Defining the modern dance: We'll explore the shifting ground where agile methodologies meet traditional engineering rigor, and how architects leverage the BTABoK's 5 Pillars of Architecture Skill (Business Strategy, Technology Strategy, Solution Design, Delivery & Operations, and Value Management) to navigate the tension inherent in the roles. We will also dive into the healthy and natural balance between engineering and architecture and why it is so essential to modern systems success.
Understanding the business waltz: We'll shift perspectives, examining how business objectives and user needs inform the interaction between agility and precision in complex projects, aligning with the BTABoK's emphasis on strategic value delivery.
Change Management and Architecture: We'll look into practical strategies for managing the roadmap and delivery of a project including:
Harmonizing delivery: Finding the sweet spot between iterative sprints and long-term vision, as outlined in the Delivery & Operations pillar.
Hands-on leadership: Empowering teams while providing strategic guidance, aligning with the Leadership and People skills.
Technical Excellence: Ensuring high-quality engineering practices within an agile framework, leveraging the BTABoK's Technology pillar.
Human Systems: Addressing unforeseen challenges and navigating changing requirements, a key competency in the BTABoK's Solution Design pillar.
Case studies from the architectural stage: Learn from real-world examples of successful projects where architects mastered the interaction between agility and engineering excellence, demonstrating the practical application of the BTABoK principles.
Open discussion: Share your own experiences and challenges, and engage in a collaborative conversation about navigating the complexities of modern architecture, applying the BTABoK's collaborative and knowledge-sharing principles.
This session is for you if you are:
A senior developer or technology leader working with legacy systems.
A senior architect looking at a portfolio of change and large structural systems delivery.
Responsible for delivering new technology solutions in a complex, existing environment.
Keen to gain practical strategies and actionable insights from a business and technical perspective.
If you are getting tired of the appearance of new types of databases… too bad. We are increasingly relying on a variety of data storage and retrieval systems for specific purposes. Data does not have a single shape and indexing strategies that work for one are not necessarily good fits for others. So after hierarchical, relational, object, graph, column-oriented, document, temporal, append-only, and everything else, get ready for Vector Databases to assist in the systematization of machine learning systems.
This will be an overview of the benefits of vector databases as well as an introduction to the major players.
We will focus on open source versus commercial players, hosted versus local deployments, and the attempts to add vector search capabilities to existing storage systems.
We will cover:
There is plenty of discussion about how machine learning will be applied to cybersecurity initiatives, but there is precious little conversation about the actual vulnerabilities of these systems themselves. Fortunately, there are a handful of research groups doing the work to assess the threats we face in systematizing data-driven systems. In this session, I will introduce the main concerns and how you can start to think about protecting against them.
We will mostly focus on the research findings of the Berryville Institute of Machine Learning. They have conducted a survey of the literature and have identified a taxonomy of the most common kinds of attacks including:
This will be a security-focused discussion. Only a basic understanding of machine learning is required.
Jamie Zawinski once said, “Some people, when confronted with a problem, think ‘I know, I'll use regular expressions.’ Now they have two problems.” Many consider regular expressions to be indecipherable, but the truth is that every programmer should consider regular expressions an integral part of their toolkit. From the command line to your favorite text editor, from parsing user input to scraping HTML pages - once you know regular expressions you will find a use for them in almost every programming context.
In this highly interactive workshop we will decipher the cryptic construct that is a regular expression. Starting with the basics, we will work our way towards advanced usage, including anchors, modifiers, groups, and lookarounds.
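As a small taste of those features, here is a sketch in Java's java.util.regex showing an anchor, a modifier flag, a named group, and a lookahead. The patterns and class names here are illustrative, not workshop material.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexTour {
    // Anchors (^ $), a named group, and the CASE_INSENSITIVE modifier.
    static final Pattern VERSION =
        Pattern.compile("^v(?<major>\\d+)\\.(?<minor>\\d+)$", Pattern.CASE_INSENSITIVE);

    static String major(String input) {
        Matcher m = VERSION.matcher(input);
        return m.matches() ? m.group("major") : null;
    }

    // Lookahead: require at least one digit somewhere, without consuming it.
    static final Pattern HAS_DIGIT = Pattern.compile("^(?=.*\\d).+$");

    public static void main(String[] args) {
        System.out.println(major("V2.10"));                      // case-insensitive match
        System.out.println(HAS_DIGIT.matcher("abc1").matches()); // lookahead succeeds
    }
}
```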
This is a HIGHLY interactive workshop — Not only will we have a lot of exercises, we will use a playground that will allow us to experiment to our heart's content! Feel free to come in with issues you may have seen at work!
Agenda:
Let's once and for all make sense of this powerful tool.
I hope to see you all there.
Kafka is a “must know.” It is the data backplane of the modern microservice architecture. It's now being used as the first persistence layer of microservices and for most data aggregation jobs. As such, Kafka has become an essential product in the microservice and big data world.
This workshop is about getting started with Kafka. We will discuss what it is and what its components are, cover the CLI tools, and learn how to program a Producer and a Consumer.
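As a preview of the producer side, here is a sketch of the configuration a Kafka producer needs. The broker address and topic name are assumptions for illustration; the commented lines require the kafka-clients dependency on the classpath.

```java
import java.util.Properties;

public class ProducerSetupSketch {
    // Minimal configuration a Kafka producer needs (broker address assumed).
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for full replication before acknowledging
        return props;
    }

    public static void main(String[] args) {
        // With kafka-clients on the classpath you would continue with:
        //   try (var producer = new KafkaProducer<String, String>(producerProps())) {
        //       producer.send(new ProducerRecord<>("greetings", "key", "hello"));
        //   }
        System.out.println("acks = " + producerProps().getProperty("acks"));
    }
}
```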
When open-source projects don't have good documentation, most people don't use them, but that is not an option when you have to use code developed by other teams within your own company. Then you have to use it, and the lack of documentation can lead to frustration and even desperation. Why doesn't good internal documentation exist? The problem is threefold:
If you understand everything, it is hard to know what is confusing to others
If you don't know things, you know what you wish existed, but you can't write it yourself
Even if you have both people present, it is still hard to write good documentation
In this session, I will show you how to solve all three problems.
If you think pair programming (2 people on 1 computer) is crazy, hold onto your hats; it's time for Mob Programming.
Mob Programming: All the brilliant people working on the same thing, at the same time, in the same place, and on the same computer.
We are going to take a look at a new way of working, what it looks like, and why it can work. More importantly, we’ll have a (very) short session of actual mobbing, so you can see for yourself and come to your own conclusions.
So you've embraced Apache Kafka as the core of your data infrastructure, adopting event-driven services that communicate with each other through topics, integrate with legacy systems through an ecosystem of connectors, and respond more or less in real time to things that happen in the world outside your software. Logs of immutable events form a more robust backbone than the one-database-to-rule-them-all of your deep monolith past. Your stack is more evolvable, more responsive, and easier to reason about. There's just one problem: now that everything is a stream, how do you query things?
If this is you, you can probably name at least one or two ways off the top of your head, but have you stopped to think through how to make the choice? It is time you did.
In this talk, we'll explore the solutions currently in use in the world for asking questions about the contents of a topic, including Kafka Streams, the various streaming SQL implementations, your favorite relational database, your favorite data lake, and real-time analytics databases like Apache Pinot. There is no single correct answer to the question, so as responsible builders of systems, we must understand our options and the tradeoffs they present to us. You'll leave this talk even happier that you've embraced Kafka as the heart of your system, and ready to deploy the right choice for querying the logs that hold your data.
Technologists tend to think of end-to-end as meaning from the UI to the database, from client to server. But a true architect realizes end-to-end means from idea to retirement. The architect has to traverse a lot of territory in that journey, from business to technology: change management, complex stakeholder dynamics, systems of systems, and of course technical decisions.
This introduction to the end-to-end architect will prepare you with the 5 pillars of architecture as well as help you understand how to navigate the complexity of end-to-end architecture work!
Domain Driven Design has been guiding large development projects since 2003, when the seminal book by Eric Evans came out. Domain Driven Design is split into two parts: strategic and tactical. One of the issues is that the strategic part becomes so involved and intense that we lose focus on implementing the tactical side. This presentation reframes that focus as topic pairs. For example, when we create a bounded context, is that a microservice or part of the subdomain? When we create a domain event, what does that eventually become? How do other tactical patterns fit into what we decide in the strategic phase?
In this presentation, we will break it down into pairs of topics.
Game of Life is an intriguing game. At first glance it looks simple, but as you look closer, it appears to be quite complex. How can we implement this game with different constraints, and what are those constraints? Is it possible to use functional programming, to honor immutability? You see, it is intriguing.
We will discuss the constraints, think about how we may be able to solve them, and along the way discover how functional programming can play a role. By the end of this session we will have a fully working program, built with live coding, to illustrate some nice ideas that will emerge from our discussions.
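To hint at the functional flavor, here is one possible sketch of a Game of Life generation step using immutable sets and streams. This is not the session's implementation; the class and record names are illustrative.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class Life {
    record Cell(int x, int y) {}

    // The eight neighbors of a cell.
    static Set<Cell> neighbors(Cell c) {
        Set<Cell> out = new HashSet<>();
        for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
                if (dx != 0 || dy != 0) out.add(new Cell(c.x() + dx, c.y() + dy));
        return out;
    }

    // Pure function: the next generation, computed without mutating the input.
    static Set<Cell> next(Set<Cell> alive) {
        Map<Cell, Long> counts = alive.stream()
            .flatMap(c -> neighbors(c).stream())
            .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
        return counts.entrySet().stream()
            .filter(e -> e.getValue() == 3
                      || (e.getValue() == 2 && alive.contains(e.getKey())))
            .map(Map.Entry::getKey)
            .collect(Collectors.toUnmodifiableSet());
    }

    public static void main(String[] args) {
        // A horizontal blinker flips to a vertical one.
        Set<Cell> blinker = Set.of(new Cell(0, 0), new Cell(1, 0), new Cell(2, 0));
        System.out.println(next(blinker));
    }
}
```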
Join us for an immersive journey into the heart of modern cybersecurity challenges. In this groundbreaking talk, we delve into the intricacies of securing your digital assets with a focus on three critical domains: applications, APIs, and Large Language Models (LLMs).
As developers and architects, you understand the paramount importance of safeguarding your systems against evolving threats. Our session offers an exclusive opportunity to explore the industry-standard OWASP Top 10 vulnerabilities tailored specifically to your domain.
Uncover the vulnerabilities lurking within your applications, APIs, and LLMs, and gain invaluable insights into mitigating risks and fortifying your defenses. Through live demonstrations and real-world examples, you'll witness firsthand the impact of security breaches and learn proactive strategies to combat them.
Whether you're a seasoned architect seeking to fortify your organization's security posture or a developer striving to build resilient systems, this talk equips you with the knowledge and tools essential for navigating the complex landscape of cybersecurity.
Agenda
OWASP Top 10 Overview
OWASP Top 10 for Application Security
OWASP Top 10 for API Security
OWASP Top 10 for LLM Applications (Large Language Models)
Q&A and Discussion
Conclusion
This talk explores how cutting-edge technologies and trends will shape the future of enterprise software development, creating opportunities for innovation and efficiency. We’ll discuss how to leverage these technologies within an Enterprise Architecture framework to build a robust roadmap that guides enterprises through technological advancements and competitive landscapes.
Introduction
Generative AI, Graph Databases, and Vector Databases
GPTs and Copilot GPTs
Augmented Reality (AR) & Virtual Reality (VR)
Edge Computing
Artificial Intelligence & Machine Learning (AI/ML)
Blockchain Technology
Agents and Advanced Automation
Preparing for the Future
“What you must learn is that these rules are no different than the rules of a computer system. Some of them can be bent. Others can be broken.”
-Morpheus
The world of technology seems logical, objective, and governed by consistent rules. Reality can be illogical, subjective, and even random at times. Yet we accept subjective reality as-is; our single perspective is all we can really know.
In the end, reality constrains engineers, and magicians reshape reality. Perhaps reality is not what it seems.
Join Michael Carducci, magician and software architect, as he takes you on a journey through the marriage of the logical and the illogical, the intersection of magic and technology. Discover what each has to teach the other, and how you can apply the lessons to transform your skills and your career.
With over 25 years of experience in both fields–and a lifetime of successes and failures–Michael shares his deeply reflective, unique, and authentically honest perspective on both careers, dealing with problems, challenges, wins, and losses.
This talk combines illusion, engineering wisdom, life lessons, and the stories that connect them. You'll be astonished, engaged, and leave with an entirely new perspective on technology and life.
GitHub Copilot is a generative AI coding tool that assists developers in writing code more efficiently. This full-day course will help you gain a comprehensive understanding of the tool's capabilities and how to use it effectively in your day-to-day coding.
In this full-day class, we'll cover the basics of Copilot and provide you with hands-on experience through labs. You'll learn the what, why, and how of Copilot and see how to leverage its generative AI functionality in daily coding tasks across multiple languages. You'll also learn key techniques and best practices for working with Copilot.
IMPORTANT NOTE: In order to do the labs for this course, you must have a GitHub Copilot subscription. If you do not, you can log into GitHub, then go to https://github.com/settings/copilot and sign up (start free trial) before the course.
Attendees will need a GitHub account, a browser, a subscription to the free Copilot trial, and the ability to run GitHub Codespaces if you want to use our pre-configured environment.
Also, we will provide a way to run the environment for this and all tools using GitHub Codespaces. As long as you can use GitHub Codespaces, you should be all set. These are virtual environments that run on systems provided by GitHub/Azure and all functions can be accessed through the browser.
If you intend to use a corporate GitHub account, please make sure in advance that you can startup and run a GitHub Codespace with that account. You can log into GitHub and go to https://github.com/codespaces and start a new codespace from there to try this out. If you are not allowed via your company's policies to run Codespaces in GitHub, you will need to create/use a personal GitHub account. This will be using the public GitHub site github.com.
You will need a GitHub account and a browser. The use of Chrome is recommended since it seems to work better with copy/paste functionality within Codespaces.
If you prefer, you can install Copilot in another environment/IDE, but we will not be providing directions and support for that in the workshop. Some operations may not work on your local system depending on the setup. The labs will be set up for the custom Codespace environment.
Have you ever asked an AI language model like ChatGPT about the latest developments on a certain topic, only to receive this response: “I'm sorry, but as of my last knowledge update in January 2022, I don't have information on the topic at hand.” If you have, you've encountered a fundamental limitation of large language models. You can think of these models as time capsules of knowledge, frozen at the point of their last training. They can only learn new information by going through a retraining process, which is both time-consuming and computationally intensive.
In the fast-paced world of artificial intelligence, a new technology is emerging to tackle this challenge: Retrieval-Augmented Generation, or RAG. This innovative approach is revolutionizing how language models operate, breaking down barriers and opening up new possibilities. But what exactly is RAG? Why is it important? And how does it work? We'll cover all this and more in this talk.
Designing a distributed system architecture can be a daunting task, with contradictory requirements and constraints constantly at play. The CAP theorem, which captures the fundamental challenge in distributed data stores, presents a classic example: developers must choose between consistency, availability, and partition tolerance. The same applies to streaming infrastructure, where optimizing for one aspect can come at the cost of another. With cost, throughput, accuracy, and latency as the main constraints for streaming systems, it's crucial to make informed decisions that align with your business goals.
In this session, you'll gain valuable insights into how your design choices impact your system's overall capabilities. You'll also learn about the differences between Flink Streaming and Spark Streaming, both conceptually and in practice. Lastly, you'll understand how combining multiple solutions can be beneficial for your team and business. Join us to learn more about the complex world of distributed stream processing systems.
Web Components allow developers to create reusable components without a framework.
During this talk we'll learn about the Custom Elements, Template, and Shadow DOM specifications with code examples, along with tools like Angular that help you utilize these APIs. We'll cover an example custom element that Comcast is using across all of its sites for millions of users, and we'll demo component libraries and show how easy they are to integrate into existing sites.
Performance is the number one feature Progressive Web Apps need in order to compete with native apps. To remove jank from the experience, the Chrome dev tools provide excellent insight into root causes.
Let's explore how to find issues in your app and keep your PWAs feeling Native.
Java advances quickly, and it is incredible how much incremental change accumulates over time. JDK 17 is now three years old, and we are at JDK 22 as of 2024. In this session, I will take some select JEPs (JDK Enhancement Proposals) and demonstrate what they are and their use cases, so you can stay ahead of the curve and have all the information you need to sell and demand next-generation Java for your work or open-source initiative.
super()
switch expressions
Cloud usage has been soaring over the last few years, and now developers are starting to get pressure to reduce cloud spend. In this session, we’ll discuss how to optimize your cloud utilization, and hence how much your team spends, on cloud infrastructure.
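As a taste of one such enhancement, a switch expression (standardized in JDK 14 by JEP 361) yields a value directly instead of falling through cases. The `daysIn` method below is a hypothetical illustration, not from the session:

```java
public class SwitchDemo {
    // a switch expression: each arm yields a value, no break statements needed
    static int daysIn(String month) {
        return switch (month) {
            case "Feb" -> 28;                          // ignoring leap years here
            case "Apr", "Jun", "Sep", "Nov" -> 30;     // multiple labels per arm
            default -> 31;
        };
    }
}
```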
We’ll discuss these topics with a specific focus on Java applications:
Architecture of your application
PaaS, CaaS, Cloud Functions or Kube?
JVM ramp-up & optimization time
Headroom for variable load
Over provision or elastic compute?
Security is a fundamental concern and requirement in all aspects of software development today. And GitHub is the industry-leading collaboration platform for software development. So, it’s crucial that anyone working with/in GitHub understands how to use it securely. In this session, author and trainer Brent Laster will provide a brief overview of the key GitHub security features available to you for free and through GitHub Advanced Security.
This session will cover authentication via keys and tokens, guarding your branches and tags with protection rules and rulesets, code scanning, and secret scanning. We’ll also touch on security logging, creating security policies, and security alerts.
Software is now 80% open source and third-party and 20% proprietary code that stitches it together into business-critical applications. In these large and diversely composed codebases, dependencies change frequently at their own pace and security vulnerabilities can be introduced at any time by anyone. Not updating software regularly leads to critical bugs, performance, and security issues (plus your code can just get harder to work with!).
Mass code refactoring in these massive codebases is a multi-point operation that requires accuracy and consistency. It’s about affecting change across many individual cursor positions in thousands of repositories representing tens or hundreds of millions of lines of code. Whether you’re migrating frameworks or guarding against vulnerabilities, this requires coordination, tracking, and accuracy. This is not a problem AI can solve alone. AI, like many humans, is not good at math and programming. AI needs a computer just like a human does.
In this talk, we’ll discuss automated code remediation with the deterministic OpenRewrite refactoring engine, a technology born at Netflix in 2016. It’s built on manipulating the Lossless Semantic Tree (LST) representation of code with recipes (programs) that result in 100% accurate style-preserving code transformations. It is a rule-based, authoritative system. Then we’ll show how to couple the precision of a rules-based system with the power of AI. We'll demonstrate a generative AI procedure that samples source code to identify defects and uses OpenRewrite to fix them. This is a general purpose pattern you're going to start seeing a lot of — “ChatGPT gets a computer” (with OpenRewrite as the computer in this case).
How adaptable is your technology stack to changes in business requirements, technological advancements, and the availability of new and better tools (and avoiding vendor lock-in!)?
When you can more easily secure, upgrade, move, or modernize your code, that means you can adapt quickly and efficiently to changes in technology and markets. That’s what Migration Engineering is all about, which we’ll be exploring in this workshop.
We’ll discuss and demonstrate how to write custom recipes using OpenRewrite, an open source auto-refactoring tool, to study and analyze your code before planning migration and modernization efforts—and then automate code changes for your teams.
You’ll also learn how to write recipes that will automate code search and transformation actions that are custom to your organization. We will assemble these recipes with the visitor pattern, show how to stitch recipes together with YAML, with Refaster-style templates, with Semgrep matchers, etc. This is a comprehensive look at all kinds of recipe development.
You will come away fully equipped to plan and exercise large scale code transformations in your organization.
OpenRewrite Outline
Are you behind on migrating from Spring Boot 2 to 3, or Java 8 to 11 to 17 to 21? Are you stuck on an old library or tool when you really want to switch? Are you feeling buried under the weight of CVE patches and static analysis issues? If you can't keep up, your codebase becomes extremely difficult and expensive to maintain and evolve. Code breaks. Innovation dies.
The rise of the 100x developer is here, and it's fueled by automated code maintenance, migration, and security at scale. In this keynote, we're going to cover how development teams can achieve the ultimate in productivity and efficiency when not held back by old, stagnant code. Learn about the largest Java auto-refactoring ecosystem, the wealth of open-source code analysis and transformation recipes available, and how you can put it all to work for you today and every day.
We'll be showing off some of the latest advancements in OpenRewrite open-source refactoring tech, sharing its integration in some well-known dev tools like IntelliJ IDEA, as well as how to break out of single-repo refactoring and leverage AI to make a big impact across your codebase.
Embark on an exciting journey into the intersection of psychology, software development, and Developer Productivity Engineering (DPE) as we explore how Nobel laureate Daniel Kahneman's groundbreaking work on the psychology of judgment and decision-making can optimize your development practices while conserving cognitive resources. Discover the impact of System 1 and System 2 thinking on the software development process, and learn to strike the perfect balance between “fast” intuitive thinking and “slow” deliberative reasoning.
We'll delve into practical DPE strategies for reducing mental fatigue and minimizing context switches, focusing on techniques like build system performance optimization, test parallelization, AI-powered test selection, and developer productivity observability. By leveraging these best practices, you'll enhance your and your team's productivity and maintain focus on high-value tasks.
Whether you're an experienced developer seeking to boost your performance or a curious newcomer eager to learn about the connections between the human mind, software development, and DPE, this talk is for you. Join us for an exhilarating adventure into the fast and slow worlds of coding, and uncover new ways to maximize your cognitive resources.
If only it were so easy! Leadership is a thing into which many find themselves thrown, and to which many others aspire—and it is a thing which every human system needs to thrive. Leading teams in technology organizations is not radically different from any other kind of organization, but does tend to present a common set of patterns and challenges. In this session, I’ll examine them, and provide a template for your own growth as a leader.
We’ll cover the following:
The relationship between leadership, management, and vision
Common decision-making pathologies and ways to avoid them
Strategies for communication with a diverse team
The basics of people management
How to conduct meetings
How to set and measure goals
How to tell whether this is a vocation to pursue
No, you will not master leadership in this short session, but we will cover some helpful material that will move you forward.
In these perplexing times, when jobs vanish like ice cream on a hot summer day, tech professionals must find the sweet spot between concern for their affected colleagues and the pressing need to scoop up progress.
Discover Developer Productivity Engineering (DPE), the secret sauce that helps both individuals and companies navigate the labyrinth of layoffs and limited resources with flair. Join us as we trek through a landscape teeming with dispirited engineers and floundering companies, showcasing DPE's power to transform even the most disenchanted developer into a productivity maestro.
By adopting DPE's clever approach, we can face economic instability with unity and a dash of wit, turning sour lemons into a refreshing lemonade of success.
In this presentation, we'll cover the options, tips, and tricks for using GitHub Copilot to help us identify how to test code, generate tests for existing code, and generate tests before the code.
Join global trainer, speaker, and author of the upcoming book, Learning GitHub Copilot, Brent Laster as he presents material on multiple ways to leverage Copilot for testing your code on any platform and framework.
Have you wondered what options GitHub Copilot can provide for helping to not only write your code, but test your code? In this session, we'll examine some key ways that Copilot can support you in ensuring you have the basic testing needs covered. In particular, we'll cover:
Updated! LM Studio is an easy-to-use desktop app for experimenting with local and open-source Large Language Models (LLMs) by hosting them on your own system.
Here are some of the features it provides (quoted from its homepage):
Run LLMs on your laptop, entirely offline
Use models through the in-app Chat UI or an OpenAI compatible local server
Download any compatible model files from HuggingFace repositories
Discover new & noteworthy LLMs in the app's home page
Hugging Face is a community hub focused on creating and sharing AI models. It provides many free and pre-trained models as well as datasets and tools to use with them.
Ollama is a command line tool for downloading, exploring, and using LLMs on your local system.
In this hands-on workshop, we'll cover the basics of getting up and running with LM Studio and Ollama, with hands-on labs where you can use them and Hugging Face to find, load, and run LLMs, interact with them via chat and Python code, and more!
Join author, trainer and speaker Brent Laster to learn about LM Studio, Hugging Face, Ollama, and Streamlit and how to use them to find and use Large Language Models hosted and running in your own environment. Get hands-on experience with the applications and learn how to DIY your own Gen AI!
Agenda:
Section 1: Introduction
In this section, we'll talk about what LLMs are, learn about basic use of LM Studio to find models and also start to look at huggingface.co.
Lab 1 - Getting familiar with LM Studio and models
Purpose: In this lab, we’ll start to learn about models through working with one in LM Studio.
Section 2: Chatting with LLMs and using their APIs
In this section, we'll learn about how we can chat with an LLM, the different roles involved in chatting, and how to also use API calls from the command line to interact with models.
Lab 2 - Chatting with our model
Purpose: In this lab, we'll see how to load and interact with the model through chat and terminal.
Section 3 - Programming for local models
In this section, we'll look at how to create some Python code to interact with LM Studio via its lms interface, as well as the lmstudio.js library for JavaScript and TypeScript.
Lab 3 - Coding to LM Studio
Purpose: In this lab, we'll see how to do some simple Python and JavaScript code to interact with the model.
Section 4 - Leverage HuggingFace.co
In this section, we'll look more into the model details and tools for using models that Hugging Face offers, including its transformers library and pipelines.
Lab 4 - Working with models in Hugging Face
Purpose: In this lab, we’ll see how to get more information about, and work directly with, models in Hugging Face.
Section 5 - Using Ollama
In this section, we'll learn about how we can use the standalone tool Ollama to get and run LLMs. We'll also talk about multimodal models.
Lab 5 - Using Ollama to run models locally
Purpose: In this lab, we’ll start getting familiar with Ollama, another way to run models locally.
Section 6 - Creating simple UIs for GenAI with Streamlit
In this section we'll work with Streamlit, a graphical Python library, to see how to quickly and easily create interactive interfaces like chatbots to use with our local LLMs.
Lab 6 - Building a chatbot with Streamlit
Purpose: In this lab, we'll see how to use the Streamlit application to create a simple chatbot with Ollama.
We will provide a way to run the environment for this and all tools using GitHub Codespaces. As long as you can use GitHub Codespaces, you should be all set. These are virtual environments that run on systems provided by GitHub/Azure and all functions can be accessed through the browser.
If you intend to use a corporate GitHub account, please make sure in advance that you can startup and run a GitHub Codespace with that account. You can log into GitHub and go to https://github.com/codespaces and start a new codespace from there to try this out. If you are not allowed via your company's policies to run Codespaces in GitHub, you will need to create/use a personal GitHub account. This will be using the public GitHub site github.com.
You will need a GitHub account and a browser. The use of Chrome is recommended since it seems to work better with copy/paste functionality within Codespaces.
If you prefer, you can install LM Studio and the other apps on your personal system, but we will not be providing directions and support for that in the workshop. Some operations may not work on your local system depending on the setup. The labs will be set up for the custom Codespace environment.
One of the more troublesome parts of creating good quality software is how we handle problems that interrupt the “happy path”. Exacerbating this problem, most of our design methodologies don't seem to offer much advice on the topic. In this session we'll investigate some general categories of failure and consider the macro behaviors that are appropriate to each and where–in terms of the call stack–those actions should be taken.
The discussion will compare the strengths and weaknesses of the most common ways of representing failure: sentinel values, exceptions (both Java's checked variant, and the more common unchecked approach), and the functional technique based on monads. Along the way, we'll consider the abstraction of error, the consequences of getting this design aspect wrong, and how the design of our APIs might make errors less common in the first place.
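As a small taste of the monadic technique, here is a minimal Result type sketched with sealed interfaces, records, and a pattern switch (JDK 21). The type and method names are illustrative, not any particular library's API:

```java
// a failure is a value, not a thrown exception or a sentinel
sealed interface Result<T> permits Ok, Err {}
record Ok<T>(T value) implements Result<T> {}
record Err<T>(String reason) implements Result<T> {}

class Results {
    // wrap a failure-prone operation so the error is part of the return type
    static Result<Integer> parseInt(String s) {
        try {
            return new Ok<>(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return new Err<>("not a number: " + s);
        }
    }

    // callers must handle both cases; the switch is checked for exhaustiveness
    static int orElse(Result<Integer> r, int fallback) {
        return switch (r) {
            case Ok<Integer> ok -> ok.value();
            case Err<Integer> err -> fallback;
        };
    }
}
```

The compiler's exhaustiveness check is what distinguishes this style from sentinel values: forgetting the failure case becomes a compile error rather than a latent bug.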
Apache Flink has become the standard piece of stream processing infrastructure for applications with difficult-to-satisfy demands for scalability, high performance, and fault tolerance, all while managing large amounts of application state.
The key to demystifying Apache Flink is to understand how the combination of stream processing plus application state has influenced its design and APIs. A framework that cares only about batch processing, or one that performed only stateless stream processing, would be much simpler.
We'll explore how Flink's managed state is organized, and how this relates to the programming model exposed by its APIs. We'll look at checkpointing: how it works, the correctness guarantees that Flink offers, how state snapshots are organized on disk, and what happens during recovery and rescaling.
We'll also look at watermarking, which is a major source of complexity and confusion for new Flink developers. Watermarking epitomizes the requirement Flink has to manage application state in a way that doesn't explode as those applications run continuously on unbounded streams.
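The core idea of watermarking can be illustrated without Flink itself. The standalone sketch below (our own illustration, not Flink's actual API) tracks the maximum event timestamp seen and emits a watermark that lags it by a bounded out-of-orderness allowance:

```java
// standalone sketch of bounded-out-of-orderness watermarking
public class WatermarkSketch {
    private final long maxOutOfOrderMs;
    private long maxTimestampSeen = Long.MIN_VALUE;

    public WatermarkSketch(long maxOutOfOrderMs) {
        this.maxOutOfOrderMs = maxOutOfOrderMs;
    }

    // observe an event's timestamp and return the current watermark
    public long onEvent(long eventTimestampMs) {
        maxTimestampSeen = Math.max(maxTimestampSeen, eventTimestampMs);
        return currentWatermark();
    }

    // the watermark asserts: no event with a timestamp at or before this
    // value is expected to arrive anymore
    public long currentWatermark() {
        return maxTimestampSeen - maxOutOfOrderMs;
    }
}
```

Note that a late event never moves the watermark backwards; watermarks only advance, which is what lets state for completed windows be discarded rather than grow without bound.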
You'll leave with a good mental model of Apache Flink, ready to use it in your own stateful stream processing applications.
Architecture is often described as “the stuff that's hard to change” or “the important stuff (whatever that is).” At its core, architecture defines the very essence of software, transcending mere features and functions to encompass vital capabilities such as scalability, evolvability, elasticity, and reliability. But here's the real question: where do these critical capabilities truly originate?
In this session, we'll embark on a journey to uncover the secrets behind successful architectures. While popular architecture patterns may offer a starting point, it's time to unveil the startling truth – both monolith and microservices-based projects continue to stumble and falter at alarming rates. The key to unparalleled success lies in the art of fine-tuning and tailor-making architectures to precisely fit the unique needs of your organization, environment, and the teams delivering the software.
Step into the future as we introduce a groundbreaking, problem-centric approach to defining and evolving system architectures. Our practical techniques will empower you to transform constraints, both architectural and environmental, into powerful enablers of robust, valuable, and long-lived software systems.
Join us and elevate your architecture game to new heights!
One of the features that distinguished Java from a majority of mainstream languages at the time it was released was its platform-independent threading model.
The Java programming language provides core, low-level, features to control how threads interact: synchronized, wait/notify/notifyAll, and volatile. The specification also provides a “memory model” that describes how the programmer can share data reliably between threads. Using these low-level features presents no small challenge, and is error prone. Contrary to popular expectation, code written this way is often not faster than code created using the high level java.util.concurrent libraries. Despite this, there are two good reasons for understanding these and the underlying memory model. One is that it's quite common to have code written in this way that must be maintained, and such maintenance is impractical without an understanding of these features. Second, when writing code using the higher level libraries, the memory model, or more specifically, the “happens-before” relationship still guides how and when we should use these libraries.
This session presents these features in a way designed to allow you to perform maintenance, and write new code without being dangerous.
In this presentation, trainer and author Brent Laster will discuss the good, the bad, and the ugly of both GitHub Copilot and Codeium as AI coding assistants.
We'll look at the functionality offered, how they integrate in IDEs, the quality of results, cost factors, and other key aspects. The presentation will include demos of both tools in similar contexts.
In this example-driven session, we're going to look at how to implement GraphQL in Spring. You'll learn how Spring for GraphQL builds upon GraphQL Java, recognize the use-cases that are best suited for GraphQL, and how to build a GraphQL API in Spring.
Typical REST APIs deal in resources. This is fine for many use cases, but it tends to be more rigid and less efficient in others.
For example, in a shopping API, it's important to weigh how much or how little information should be provided in a request for an order resource. Should the order resource contain only order specifics, with no details about the order's line items or the products in those line items? If all relevant details are included in the response, then the resource breaks its boundaries and is overkill for clients that do not need the extra data. On the other hand, proper factoring of the resource requires the client to make multiple requests to the API to fetch all of the information it may need.
GraphQL offers a more flexible alternative to REST, setting aside the resource-oriented model and focusing more on what a client needs. Much as how SQL allows for data from multiple tables to be selected and joined in response to a query, GraphQL offers API clients the possibility of tailoring the response to provide all of the information needed and nothing that they do not need.
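To make that concrete, here is a hedged illustration (the schema, field names, and order id are hypothetical, not from any real API) of the kind of query a GraphQL client for such a shopping API could send: one request naming exactly the order, line-item, and product fields it needs, and nothing more.

```java
public class GraphQLQueryDemo {
    // A hypothetical GraphQL query: the client selects order status plus
    // nested line-item and product details in a single round trip, instead
    // of fetching /orders/42, then each line item, then each product.
    static String orderQuery() {
        return """
                query {
                  order(id: "42") {
                    status
                    lineItems {
                      quantity
                      product { name price }
                    }
                  }
                }
                """;
    }

    public static void main(String[] args) {
        System.out.println(orderQuery());
    }
}
```

A client needing only the status would simply omit the lineItems selection, and the server would do proportionally less work.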
Statistically speaking, you are most probably an innovator. Innovators actively seek out new ideas, technologies, and mental models by reading books, interacting with a broader social circle, and attending conferences. While you may leave this conference with the seed of an idea that has the potential to transform your teams, products, and organization, the battle has only begun. As a potential change agent, you are ideally positioned to conceive of powerful new ideas, yet you may be powerless to drive the change that leads to adoption. Your success requires the innovation to diffuse.
Fortunately, there has been over a century of study on the diffusion of innovations. Diffusing an innovation is difficult but tractable, and this session illuminates the path. You will get to the heart of why some innovations succeed while others fail, as well as how to tip the scales in your favor. You'll leave armed with the tools to become a powerful change agent in your career and life and, ultimately, become a more powerful and influential person.
Making large, important technical decisions is a critical aspect of a software engineer's role. With the wide impact these decisions can have, it is essential to make the correct decision. Even more vital is ensuring the decision is made and communicated in a way that the team members impacted by it trust and buy into. Otherwise, even the best decisions will never realize their full potential when executed.
This case study examines how Comcast has employed the Analytic Hierarchy Process (AHP), a decision-making framework developed in the 1970s, and adapted it for making technical and non-technical decisions both large and small. We will cover the key aspects that have made it successful for engineering teams, what we learned from our early mistakes, signs that the decision-making process you use is working effectively, and how you can easily leverage the AHP for your decisions.
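For flavor, here is a minimal sketch of the AHP's core arithmetic (our own simplified illustration, not Comcast's implementation): given a pairwise-comparison matrix of criteria, each criterion's priority can be approximated by normalizing each column and then averaging across each row.

```java
public class AhpDemo {
    // Approximate AHP priority vector: normalize each column of the
    // pairwise-comparison matrix, then average each row's normalized entries.
    static double[] priorities(double[][] m) {
        int n = m.length;
        double[] colSums = new double[n];
        for (double[] row : m)
            for (int j = 0; j < n; j++) colSums[j] += row[j];
        double[] p = new double[n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) p[i] += m[i][j] / colSums[j];
            p[i] /= n;
        }
        return p;
    }

    public static void main(String[] args) {
        // Hypothetical comparison of three criteria: cost, performance,
        // maintainability. m[i][j] = how strongly criterion i is preferred
        // over criterion j (and m[j][i] is the reciprocal).
        double[][] m = {
            {1.0, 3.0, 5.0},
            {1.0 / 3.0, 1.0, 2.0},
            {1.0 / 5.0, 1.0 / 2.0, 1.0},
        };
        for (double v : priorities(m)) System.out.printf("%.3f%n", v);
    }
}
```

The resulting priorities sum to 1, giving the team an explicit, discussable weighting rather than an implicit gut feel; the full AHP adds a consistency check on the matrix, which this sketch omits.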
By now, you've no doubt noticed that Generative AI is making waves across many industries. In between all of the hype and doubt, there are several use cases for Generative AI in many software projects. Whether it be as simple as building a live chat to help your users or using AI to analyze data and provide recommendations, Generative AI is becoming a key piece of software architecture.
So how can you implement Generative AI in your projects? Let me introduce you to Spring AI.
For over two decades, the Spring Framework and its immense portfolio of projects have been making complex problems easy for Java developers. And now with the new Spring AI project, adding Generative AI to your Spring Boot projects couldn't be easier! Spring AI brings an AI client and templated prompting that handles all of the ceremony necessary to communicate with common AI APIs (such as OpenAI and Azure OpenAI). And with Spring Boot auto-configuration, you'll be able to get straight to the point of asking questions and getting the answers your application needs.
In this hands-on workshop, you'll build a complete Spring AI-enabled application, applying techniques such as prompt templating, Retrieval Augmented Generation (RAG), conversational history, and tool invocation. You'll also learn prompt engineering techniques that can help your application get the best results with minimal “hallucinations” while minimizing cost.
In the workshop, we will be using…
Optionally, you may choose to use an AI provider other than OpenAI, such as Anthropic, Mistral, or Google Vertex (Gemini), but you will need an account and a reasonable amount of credit with that provider. Or, you may choose to install Ollama (https://ollama.com/), but if you do, be sure to install a reasonable model (llama3:latest or gemma:9b) before you arrive.
Know that if you choose to use something other than OpenAI, your workshop experience will vary.
With much of the industry finally migrating to Java 11, 17, or 21, it’s time to learn about many of the newer features you can use in your code. None of the changes since Java 8 have been as dramatic as the move to functional programming, but collectively the latest capabilities can really streamline the way you work. This talk summarizes several of them, like records and record patterns, sealed classes and interfaces, switch expressions, the HTTP client API, pattern matching for switch, and more, using them together in an app to see how they interact and improve your Java coding experience.
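As one hedged sketch of how several of these features compose (the shapes here are our own example, not from the talk): records, sealed interfaces, switch expressions, and record patterns combine to give exhaustive, deconstructing dispatch with no default branch.

```java
public class ModernJavaDemo {
    // A sealed interface restricts the permitted implementations, so the
    // switch expression below is exhaustive without a 'default' case.
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    // Switch expression with record patterns (Java 21): each case both
    // matches on the type and deconstructs the record's components.
    static double area(Shape s) {
        return switch (s) {
            case Circle(double r) -> Math.PI * r * r;
            case Square(double side) -> side * side;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1.0)));
        System.out.println(area(new Square(2.0)));
    }
}
```

Adding a new permitted Shape turns every such switch into a compile error until the new case is handled, which is the safety net that makes the combination more than the sum of its parts.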
In the rapidly evolving landscape of artificial intelligence, the ability to create custom Generative Pre-trained Transformers (GPTs) is not just a technological breakthrough but a gateway to new opportunities and unforeseen challenges. This talk delves into the simplicity and intricacies of crafting bespoke GPT models using OpenAI's playground. We'll explore how easily one can load personal data into these models, transforming generic AI into a tool tailored to specific needs and interests. However, this ease comes with a caveat: currently, these custom models are accessible exclusively to premium OpenAI account holders, pending the launch of a wider distribution platform.
The presentation will further illuminate the dual nature of custom GPTs: while they are augmented with user-provided data and instructions, their responses are still deeply rooted in their original, extensive training datasets. This inherent characteristic can lead to outputs that might not align with the creator's intentions or expectations, presenting a unique set of challenges.
Additionally, the talk will contrast the relative simplicity of using the playground for custom GPTs with the more complex, yet potent AI Assistants API. This API offers a more intricate approach, requiring coding expertise but providing the flexibility to integrate AI capabilities directly into applications. It shifts the paradigm from an external tool to an integral component of user-developed software.
This presentation is the Dagobah of efficient editing and flow. Come only what you take with you.
Most efficient you will be, when keyboard tricks learned. You'll see. Hmmmm. You must unlearn what you have learned. A Jedi's power comes from knowledge of the tools used. Luminous beings are we… not crude typists. Mouse is your weakness. Learn to use more of the keyboard, you will.
Learn: