Disinformation vs Misinformation - How to be an Information Detective with Dr. Ryan Hassan
Listen to Boost Oregon's Diversity Program Manager Ari O'Donovan and Medical Director Ryan Hassan, M.D., M.P.H.
On this podcast we delve into disinformation, misinformation, their relevance to vaccines, and why critically thinking about the world around you has never been so important.
Listen Now
Our Host
Our Guest
Medical Director Ryan Hassan, M.D., M.P.H.
Ryan Hassan, M.D., M.P.H., is a board-certified pediatrician working at Oregon Pediatrics in Happy Valley, where he lives with his wife, Christen, daughter, Olivia, and chihuahua, Luna Joe. His professional work centers around improving the lives of children by promoting preventive healthcare and healthy lifestyle choices at the individual and community levels and by empowering physicians to become better advocates for their patients. Dr. Hassan is an avid outdoors enthusiast and enjoys running, biking, swimming, hiking, backpacking, climbing, snowboarding, and kite surfing.
Transcript:
Ari O’Donovan (00:00):
Thank you so much for listening to Boosting Our Voices. This program has been brought to you by Boost Oregon. You can find them online at boostoregon.org.
Ryan Hassan (00:13):
It is possible to create an online space that is democratically governed, that is actually a true meritocracy where the best ideas do come to the top because all ideas are fairly representative where everyone's voice can be heard.
Ari O’Donovan (00:31):
Welcome back to Boosting Our Voices. I am Ari O'Donovan, and I am Boost Oregon's Diversity Program Manager. We are back bringing more information and topics of discussion to BIPOC communities, topics that BIPOC communities want to know about. We have Dr. Ryan Hassan as a guest today. So earlier this summer, in August, you and I gave an outstanding presentation for the 2022 Northwest Immunization Conference, the kind of presentation Boost Oregon doesn't usually give, so I was really excited to share that one. It was called Dealing with Disinformation. Now, that's a title that some people, especially people that didn't see or hear that presentation, might not know a lot about. So could you tell us a little bit: what is the difference between disinformation and misinformation?
Ryan Hassan (01:28):
Yeah, so these terms have become a lot more widely used and well known in the last few years, as disinformation has become a bigger and bigger problem at large in our society, and they're often used interchangeably. There is a difference, though. Misinformation is information that is not completely accurate, either because it's misleading, or because of its incompleteness, or sometimes it's just modified or not quite true or just outright inaccurate. It's information that's spread mistakenly. It's not intended to deceive; it does so by mistake or inadvertently. Disinformation is that same kind of information, but it's spread with the intent to deceive. So the main difference between the two is the intention of the person spreading it. That can sometimes be hard to know, but we tend to talk about disinformation, or I do, when I talk about, for example, vaccines, because a lot of the wrong information that is being spread prominently is done very much with the intent to deceive people.
Ari O’Donovan (02:36):
That's some really good foundational information that we need before we can get into any further discussion of this topic. And I know that I've been on websites buying things online, and sometimes a little box will pop up that will say, Do you want help with your purchase? Do you wanna know more about the item that you'd like to buy? And before we gave this presentation, I really didn't know that that is a form of a bot. They really are everywhere, bots are really everywhere. Can you tell us what a bot is, and what computational propaganda is, too?
Ryan Hassan (03:17):
Yeah, so I mean, a bot is any kind of software, really, that is designed to, well, there are probably many different definitions you could use. But the way I think of them, when we're talking about how to distinguish between good information and mis- and disinformation, is software designed to interact with a person. And the thing about bots is that it's often not clear that they are bots. A lot of bots act very much like humans, sometimes intentionally so, and sometimes they can be very deceptive. A lot of times in popular media, people talk about smart bots, artificial intelligence, and, you know, the very innovative ways that AI bots can be used to manipulate or deceive or to impersonate, to give the impression of interacting with a human. But the reality is, right now there aren't a lot of AI bots. Most bots that we're interacting with, like the bot that popped up and chatted with you, those are what we call dumb bots.
Ryan Hassan (04:16):
They don't really have any kind of artificial intelligence. They operate on very simple algorithms and simple coding. But even those can be very effective at maintaining a basic conversation with someone. And in the world of computational propaganda, bots are often used to falsely or disproportionately amplify the messages that the people who wrote them, or who paid for them, are trying to promote. So you might have, you know, 10,000 Twitter bots, which are just basically fake Twitter accounts, that are programmed to retweet whatever you tweet every number of minutes, or to comment on content and amplify it, or even to negatively interact with other people's content in order to disenfranchise them. So these are ways that bots have been used in more malicious ways. Now, I mentioned computational propaganda. This is a broader term that refers collectively to the many different ways we can see technology being used, weaponized, to manipulate our decision making. A formal definition would be the use of algorithms, automation, and human curation, things like the bots we talked about, to purposely distribute misleading information over social media networks. So it's a pretty all-encompassing term to describe the various ways in which malicious actors spread the messages that are in their best interests, with the intent to deceive the people they're trying to communicate with.
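[Editor's note: to make the "dumb bot" idea above concrete, here is a minimal, purely illustrative sketch of a rule-based chat responder. It is not from the episode or from any real product; every rule and phrase in it is a made-up example. The point is that a few lines of keyword matching, with no AI at all, can sustain a plausible-feeling exchange like the shopping pop-up Ari describes.]

```python
# A minimal "dumb bot": no artificial intelligence, just keyword rules.
# All keywords and canned responses here are hypothetical examples.

RULES = [
    ("price", "That item is currently on sale! Can I help you check out?"),
    ("help", "I'd be happy to help with your purchase. What are you looking for?"),
    ("shipping", "Standard shipping takes 3-5 business days."),
]
DEFAULT = "Thanks for your message! Could you tell me more?"

def reply(message: str) -> str:
    """Return the canned response for the first matching keyword."""
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return DEFAULT

print(reply("Do you need help?"))   # matches the "help" rule
print(reply("Tell me a joke"))      # no rule matches, falls through to DEFAULT
```

A propaganda variant of the same idea is no smarter: instead of answering shoppers, the loop watches an account and reposts or replies on a timer, which is how a handful of operators can appear to be thousands of voices.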
Ari O’Donovan (06:01):
That's pretty outstanding, and in a pretty bad way, that you could take something so simple as a bot I'm communicating with about a product I'd like to purchase, which for a very brief time I actually thought was a real person <laugh>, and I'm sure a lot of people felt that way too, and it turns into something that gets weaponized on social media. They can proliferate messages thousands of times, and that's just one bot. So if you have more than that, and someone is operating them in a negative way to hurt communities, it can be pretty, pretty terrible.
Ryan Hassan (06:42):
Definitely true. I mean there's a lot of examples of specific events where this has already happened, but even in general the rise of computational propaganda has been something that has eroded a lot of public trust in many of our institutions. Even within our very democracy here in this country. It's become a problem that we really need to start to try and find ways to contend with.
Ari O’Donovan (07:09):
Absolutely. I think knowledge is definitely the first step, because I think a lot of people are just plain unaware of how this happens, the extent to which it happens, and the communities that it affects disproportionately. So I think that's a really good place to start. But before we can get into any of that, I would like to know more about human decision making. When we arrive at a decision, a lot of people think we do so logically and objectively, and that's all that's happening when we come to a decision. Is that really the case, though?
Ryan Hassan (07:46):
Uh, no. No, not at all. So this is a really good question. You know, this is how I used to think about human decision making, including my own. I would say, okay, if I'm making a choice, I would just look for the information I need to make my choice, and then I would use that to make the choice. If I'm trying to answer a question, I just find the information, and that information will answer my question. So, you know, if this were true, then all it would take for people to become comfortable with making a decision to vaccinate would be to look at all the information and say, Okay, here's the information. Clearly there are huge amounts of information guaranteeing and attesting to the safety and effectiveness of vaccines, and there's nothing credible to refute that, therefore we're gonna vaccinate. But that's not what happens, because we don't make decisions logically.
Ryan Hassan (08:33):
We make decisions emotionally. This is how we've always made decisions, and it makes sense, because it's much easier to make a decision emotionally than to take the time to think through every decision you make. You know, from a historical standpoint, if you are in, say, a hunter-gatherer society, you can't stop and think about, what was that rustling in the bushes? Is it something I should run from? Let me think about the options. You have to have an immediate reaction, and you have to be able to act on it. So this is how we've been programmed to make decisions. And what often happens is, if we have a question we want answered, we usually already have an idea of what our answer is. We have a belief or assumption about what the answer should be. And when we're looking at information to try to come to a decision, what we're really doing is trying to justify what we already believe.
Ryan Hassan (09:23):
It takes a lot of very conscious effort to realize that we already have something we want to believe, and therefore we need to be conscious of that when we're looking at information, and be careful about how we're interpreting that information. Because otherwise we engage in a process called motivated reasoning, where information you see that you agree with, you're gonna remember it, and you're gonna think it's valid and makes sense, and you're gonna say, Yeah, of course that makes sense to me. And information that you don't agree with, you're gonna be less likely to even look at or engage with or really try to understand. You're gonna be more likely to just dismiss it, or forget about it, or sometimes you might even completely reinterpret it in order to fit with what you already believe. This is how we operate on a daily basis.
Ryan Hassan (10:12):
It's not as though this is a problem, it's not as though it's something that is wrong with us, it's just the way that our brains work. The problem is when we're unaware of this. If we fool ourselves into thinking that we are rational decision makers, then we can over-rely on this process that's built into us and not be aware of the logical fallacies and the cognitive biases that we're engaging in every time we make a decision. That, I think, is why it's important to be aware of the way that we think, so we can be more proactive when we're making decisions and say, Okay, what do I want to believe, and how is that affecting what information I'm looking at and how I interpret it?
Ari O’Donovan (10:54):
Yes, and I feel like a lot of it too, and you mentioned this a little bit, is that it's just easier to not critically think about things, to not focus so much on details and find out, how is this information being presented? Is it accurate? Is it true? Is it fair? Doing that all the time is exhausting. So I think it's just easier for people to not have to do it and just accept things that are already in alignment with their current belief system.
Ryan Hassan (11:28):
That's absolutely right. And you know, most of the time, through most of our lives, that works just fine, and in the past it worked even better. But as we're coming into a more nuanced and technologically connected global society, there are issues that require a little bit more deliberation for us to come to conclusions that are in line with our own best interests. And this is especially true for issues where there are very powerful interests working to achieve their own ends by intentionally deceiving us. So it's an unfortunate reality that we are gonna have to contend with. But you're right that it's too much to ask of an individual person, to say, Look, you have to do better at thinking logically and questioning every assumption. You know, we should teach people and encourage people to do this. But the onus can't be completely on us as individuals to undo or correct the problems that are created by much larger institutions, institutions that allow this kind of manipulative tampering with our information streams to go as unchecked as it does now.
Ari O’Donovan (12:39):
Exactly. And that's one of the biggest topics I feel we discussed in our presentation. People do need to engage in critical thinking and sound reasoning, but it's not just on the individual. There are bigger and greater aspects of this problem that need to be known and we need to do something about them. It's much bigger than just the individual person coming across an article on Facebook.
Ryan Hassan (13:10):
Yeah, that's exactly right.
Ari O’Donovan (13:12):
So can you tell me a little bit more about how disinformation works and do you have an example that you can share?
Ryan Hassan (13:20):
Yeah, so there are a lot of examples that I share routinely in the work that I do, both in clinic and through Boost. There are a lot of common tactics that people spreading disinformation will often use, things you can look for that might help you identify whether something is being represented accurately or not. You'll often see that information is given without a source. Or what often happens, too, is that the source is just a link: if you're online, your source is a link to a website, but the website is either a broken link, or it's another site created by the same individual or the same organization. So you have circular referencing that doesn't actually give you a primary source. Information is often incomplete or just being misrepresented. So a common example of disinformation, and I see this so frequently, I saw it even today when I was putting together some content for some of our newer Boost materials, has to do with vaccines being one of the most important factors in reducing and eliminating the burden of vaccine-preventable diseases.
Ryan Hassan (14:31):
There's a graph that's very popular online. If you search for, you know, vaccine-preventable diseases over time, something like that, and do an image search on Google, one of the most common things you're gonna find is variations of this graph showing the rate of death over time, from the early 1900s or so until today, from different vaccine-preventable diseases. And you'll see that the graph starts up really high. A lot of people were dying of measles and pertussis and pneumonia and Haemophilus. And over time the graph just kind of went down, this gradual drop over the first several decades of the 20th century. And they'll often point out, well, actually, vaccines for each of these diseases weren't introduced until later. Therefore, you can see that the vaccines had nothing to do with the drop in deaths, and therefore vaccines aren't necessary or aren't helpful.
Ryan Hassan (15:27):
This is commonly used, and it's very manipulative, and it is very convincing to people who don't know what they're looking at. And the problem is, the information they're sharing is not actually false. It is true that deaths from vaccine-preventable diseases were going down well before vaccines for these diseases were available. And that is because we learned how to treat these diseases, with things like antibiotics and steroids and surgery and intubating people and putting them on ventilators, and all other sorts of very expensive and burdensome medical interventions that have lots of complications. So, you know, we were able to keep people alive. We were able to get them back to sometimes being healthy, sometimes having some brain damage or maybe a little dismemberment or lung disease, but we were not letting them die from the diseases. What vaccines did is they allowed us to keep people from getting sick in the first place.
Ryan Hassan (16:27):
So they're omitting that information deliberately, because it's not as though, if we didn't vaccinate, we would suddenly go back to the death rates of the 1900s. We would still be able to treat a lot of people for these diseases, but people would be getting sick a lot more, and you would be seeing all these other complications of getting sick. So that's the key point that is missed in this example. They're telling you, well, vaccines don't actually save lives, because no one was dying from vaccine-preventable diseases anyway in the United States. And they're omitting the fact that vaccines actually prevented a whole lot, millions upon millions, of unnecessary illnesses and complications of those illnesses. And of course, most of those graphs are only using US data, and they don't account for, or even minimize, the fact that lots of people were still dying from vaccine-preventable diseases in other parts of the world.
Ryan Hassan (17:20):
And vaccines are the most reliable way to reduce even the smaller number of deaths that were still happening, in addition to all those other infections. So I think that's one of the most prominent examples I see in the work I do, where the people who create this content know that they're deceiving people, because they know that's not an accurate representation of the reason vaccines are helpful, but they're choosing not to share that, in order to send a more manipulative message that's in line with what they would like to spread.
Ari O’Donovan (17:51):
Yeah, and I've seen those graphs online, and they really are truly misleading, and they look like they make sense. It seems like the information is well presented, but you really have to do that critical thinking. And I think people can do what I'd like to call selective critical thinking, where you focus on something that you know is commonly not presented correctly or accurately, and then really think about the information and consider the source. I think that's really important. I don't think we have to do such advanced critical thinking on everything, just on really important things like this that maybe you weren't aware of before.
Ryan Hassan (18:36):
Yeah, and I agree with that, and what I would add is that if you are reading something that you agree with, then it's even more important that you are critical of it. You're automatically gonna be skeptical of things you don't agree with. But if you're reading something that really speaks to you, and you're like, Ah, this makes perfect sense, this is what I believe in, then you are much more at risk of being tricked or manipulated, because it's much easier for us to believe things we want to believe, even when we're being presented with information that's not accurate. You know, the problem with the example I gave, with this graph of mortality, is that they gave real information. They're giving actual facts, and they source them. They're like, Here, this is where these numbers come from.
Ryan Hassan (19:20):
And you can verify and say, Oh, that's true. And if you don't know what you're looking for, then it can be really hard to figure out where the missing piece is, or whether there's a missing piece. Even many healthcare providers that I share this example with aren't able to tell me why this graph is wrong. A lot of us just assume that it's wrong: well, this doesn't make sense, I know vaccines save lives, so this graph must be wrong. And again, that's the same problem. They're dismissing it because it doesn't fit with what they already know, but they don't know how they can come to that conclusion. And that's the important thing, because we have to be able to understand where the missing information is, or where the logical gap in reasoning is, and that just takes practice. But fortunately, a lot of times the work's already been done. I mean, you can Google "debunk" plus whatever you're looking into, and someone will often have fact-checked it for you and said, actually, this is why this particular piece of information is wrong, and here's the whole story. So we're fortunate that there is access to the right answers a lot of the time, if you have the time and the wherewithal to look for them.
Ari O’Donovan (20:29):
And that's what we tell people at Boost when they come across a source and they're questioning it, thinking, is this good information that I can really use, or is there something that I'm missing? Is there something being left out of this information? A great way to make that decision is to search "debunk" plus whatever it is that you're trying to find information on. So can you tell us more about how disinformation disproportionately impacts BIPOC communities? I think this is the most horrible and insidious way that disinformation is used, and it really does have a great impact.
Ryan Hassan (21:07):
Yeah, I could talk for so long about this. There are so many ways and reasons why communities of color are further marginalized by the spread of disinformation and computational propaganda. You know, one thing is just that computational propaganda in general is a tool that is used by people in power to maintain power. So by default, the communities who have been marginalized already are gonna be further marginalized by a system whose whole purpose is to maintain the status quo or to advance the power of the powerful. I think a really good example of this is the issue of voting rights. There is this very huge myth in America of voter fraud, you know, that we need to make sure elections are secure cuz voter fraud is rampant, so we need to have strict voter ID laws and all of these things. And it's all nonsense, it's all a smokescreen.
Ryan Hassan (22:03):
There's actually no such thing as systemic voter fraud. America has some of the most secure elections in the world, including the 2020 election, which was probably the most secure election in the world. But this unfounded claim of voter fraud is being spread with the methods of computational propaganda to get buy-in for policies that specifically disenfranchise voters of color and that have helped to further cement the status quo. That's one huge example. Another is, getting back to vaccinations, the fact that communities of color are already at higher risk from really all causes of morbidity and mortality. With Covid, for example, Black and brown communities are at two to three times higher risk of hospitalization from Covid and of death from Covid. These are things that are only further exacerbated if we have disinformation discouraging our communities from getting vaccinated.
Ryan Hassan (23:05):
You know, we've talked previously about the Tuskegee syphilis trials, and I shouldn't call them trials, they were experiments, as an example of why Black communities in particular might be very distrustful of me, of the medical establishment. And that is being capitalized on by groups such as the Disinformation Dozen, who reference that as an example of look how bad medicine is, and use it to justify their own unfounded claims that vaccines are also bad for the same reasons. They specifically target these communities and use their prior traumatic experiences in the healthcare system to get their buy-in to misinformed ideas about vaccines now. So those are a couple of ways that disinformation has been disproportionately targeted at minority communities. Another example that I think is important to talk about is that even as we've tried to take steps to correct for disinformation, those steps have also had a disproportionate impact on communities of color.
Ryan Hassan (24:11):
So we shared in our talk the fact that social media platforms like Facebook have created algorithms to try to take down disinformation content in the wake of Russian interference in election campaigns. They said, well, you know, the Russian interference was based on trying to stoke racial tensions in America, so we are gonna try and take down anything that could be construed as political advertising. And they created algorithms that identify any kind of ad on Facebook that mentions race, that says Black or Hispanic or Latinx, or even LGBTQ gender identities as well. Any mentions of those are automatically targeted and labeled as political advertisements and taken down. And so this has caught, for example, an advertisement by a university trying to recruit a professor in Black feminist studies, I think it was.
Ryan Hassan (25:14):
And that was taken down. There was also a line of dolls with Black skin color, so that children of color could see themselves reflected in their toys, which is a very important thing for kids to be able to do. That was taken down because the ad said these are, you know, Black-skinned dolls, and so it was flagged. So the systems that Facebook was using to try to address the problem disproportionately targeted the people who were already being most harmed by the problem they were trying to fix. So as I said, I could go on for so long, but there are so many ways in which this problem, even though it does affect all of us, harms communities of color most, I think, by further widening the gap in outcomes and access that exists in this country between people who are white and people who are Black or brown or from other marginalized communities.
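[Editor's note: the failure mode Dr. Hassan describes, a filter that flags any mention of certain identity terms as "political", is easy to see in a toy sketch. This is a hypothetical illustration, not Facebook's actual algorithm or term list; the terms and ad texts below are made-up examples. A bare keyword match has no sense of context, so a benign doll ad is flagged exactly as readily as actual propaganda would be.]

```python
# Hypothetical sketch of a naive keyword-based ad filter.
# The term list and ads are illustrative only, not any platform's real data.

FLAGGED_TERMS = {"black", "hispanic", "latinx", "lgbtq"}

def is_flagged(ad_text: str) -> bool:
    """Flag an ad if any word, ignoring case and punctuation, is in the term list."""
    words = {w.strip(".,!?").lower() for w in ad_text.split()}
    return bool(words & FLAGGED_TERMS)

ads = [
    "Dolls with Black skin tones so every child sees themselves",  # benign, still flagged
    "University hiring: professor of Black feminist studies",      # benign, still flagged
    "Garden hoses, 50% off this weekend",                          # unrelated, passes
]
for ad in ads:
    print(is_flagged(ad), "-", ad)
```

Because the rule keys on the words themselves rather than on intent, the cost of the filter falls hardest on content by and for the very communities it was meant to protect, which is the disproportionate impact described above.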
Ari O’Donovan (26:05):
Yeah, the information that you just shared was one of the most important topics in our presentation, and it's something that's really important to me, being a Black woman. I know family members who still are not ready to accept COVID-19 vaccines out of fear, thinking that they have chips in them that get implanted in people's bodies, these kinds of things. And they found this information on social media. So this is real, and it's happening, and it really does disproportionately affect BIPOC communities, Black and brown communities. It's really high-tech, too, cuz a lot of people really have no idea just how involved some of these tactics are.
Ryan Hassan (26:53):
I mean, it's been going on for decades, but it's been very much accelerated in the last several years. I think one piece of good news is that we are generally starting to become more aware of this and trying to take some corrective actions. Hopefully we can find ways to do more and accelerate that process. We'll have to see. It'll take a lot of work from everyone.
Ari O’Donovan (27:17):
Absolutely. And I think knowledge is definitely the very first step. And that leads me to my next question. What can community members do to fight back? Is there anything that we can do? Is knowledge the first step? Is it the best step? Is there anything else that can be done?
Ryan Hassan (27:35):
Yeah, so, you know, it's not as though there's a quick or easy fix, and the reality is, as you said earlier, it is more than just what any one of us can do. We really need structural, systemic change to be able to address these problems, cuz these are problems that are started and perpetuated by giant corporations, Google and Facebook and Amazon, that are allowing the spread of computational propaganda because the systems they've developed have now been essentially hijacked by people with malicious intentions. So I don't have great answers. I will say, definitely, knowledge is essential. People need to be aware of how much of a problem this is, the many ways it can affect our decision making on a daily basis, and the many ways we might be targeted now and in the future.
Ryan Hassan (28:29):
There's an excellent book I recommend to all of our listeners called The Reality Game: How the Next Wave of Technology Will Break the Truth, by Samuel Woolley. I read it in preparation for the talk that we gave, and I found it horrifying but extremely helpful and enlightening. And he talks not only about things we've seen in the past and are seeing now, but things we can expect in the future, as AI bots become more prominent, deepfake technology becomes more prominent, and virtual reality and augmented reality platforms become new frontiers for spreading misinformation. And I think these are things that are important to be aware of, because they're going to be happening in our lifetimes. And it's very possible, not just possible, very likely, that there will come a time when all of us are gonna be very, very vulnerable to making ill-informed decisions, even in areas of our expertise, if we're not careful about the ways that we are prone to being fooled.
Ryan Hassan (29:32):
So I would definitely encourage everyone to check out Sam Woolley's book and learn more about this. He does go over some of the potential ways we can address it, but beyond that, I think it comes down to partnering with community organizations and working together to find ways to, for example, hold our giant software firms accountable. You know, one very simple solution that I think would be fantastic is to require the organizations that we are forced to do business with, our social media companies, our search engines, our shopping engines like Amazon, our cell phone companies, to have terms of service that are actually legible and understandable. Instead of 30 pages that no one reads, that we're all signing away, have them be intelligible, brief, informative documents, so that we can actually hold these companies accountable and say, this is what you may do with our data, how you will use it, and how you will and will not share it, and place restrictions on that and hold them to it.
Ryan Hassan (30:30):
I think we need to find ways to have a publicly funded, independent journalistic outlet in this country, where we can have reliable, publicly available information that is well funded and doesn't have to chase sensationalist headlines in order to stay afloat. I think we need to find ways to fund and support our fact-checkers. Sources like Wikipedia, which, you know, Google now relies on for free to fact-check its searches, we need to make sure that these types of often nonprofit organizations are funded to be able to do the work we need them to do and are requiring them to do, and also that they're well equipped to do so. I mentioned Wikipedia, but a problem we noted in our talk is that their editors are predominantly straight white men.
Ryan Hassan (31:23):
And so, as a result, for example, most of their biographies are about men, and a very small proportion are about women. So we need to have representative fact-checking as well. So those are a few of the things that I think would be good starting points. We have to be careful as we're trying to implement change, though. We can't simply say, disinformation is illegal and will be taken down from social media, because of the problems I mentioned earlier: someone has to enforce that, it's hard to make sure that person is doing so fairly, and even if they are trying to do so fairly, it's still a hard problem. Those are some of the things that come to mind. I did read a really interesting article recently that suggested we could protect original speech on social media platforms but de-amplify it. You know, there's no reason we should say that people have a right to have their message spread millions of times, especially if it's being spread by bots, by fake accounts.
Ryan Hassan (32:23):
In fact, I don't think anyone should have that right. I think it's intentionally deceptive, and it manipulates our information streams in harmful ways, as we've seen. So that's maybe an area where there's some work to be done. It's gonna be a long problem, and a long solution that we have to try to come up with. I will add one of the points that Sam Woolley makes, a goal that we can aspire to, which I thought was really nice and a point of hope: it is possible to create an online space that is democratically governed, that is actually a true meritocracy, where the best ideas do come to the top because all ideas are fairly represented, and where everyone's voice can be heard, and it's not just the person with the most funding or the most bots who's able to get their message across better than everyone else. I think that is what I envision for a fair and equitable internet. And I think it is crucial to fixing our flow of information, so we have information streams we can rely on, where we're not crippled by having to sift through hundreds of pages of disinformation to find the truth when we're trying to answer one of our questions.
Ari O’Donovan (33:43):
Thank you for that glimmer of hope, cuz I don't want people walking away from listening to this episode and thinking, well, I'm knowledgeable, I can learn more, but I feel like there's no solution to this, we're just doomed to live in this world with all this happening online. There is some
Ryan Hassan (34:01):
Hope. Yes, I think there definitely is. I think we can make a difference. We need to commit to it, and we need people to be aware and, you know, push for change.
Ari O’Donovan (34:10):
Certainly. Thank you for sharing all the information that you have with us, Ryan. That presentation, I said it once and I'll say it again, was amazing. It was a great opportunity to share with our communities information that most people may not know about or don't think about.
Ryan Hassan (34:28):
Yeah, I agree. It was a lot of fun to do. Hopefully we can continue spreading the word about this problem as we keep working with Boost.
Ari O’Donovan (34:34):
I'd love to see that happen more frequently in upcoming projects that we do. I'd love to collaborate with new organizations where we can continue this type of education and work.
Ryan Hassan (34:47):
For sure. Me too.
Ari O’Donovan (34:49):
Wow. There will be more upcoming episodes with you, Ryan. I look forward to those. And thank you for being a guest on this episode.
Ryan Hassan (34:56):
Yeah, no problem. It was fun. Look forward to talking again.
Ari O’Donovan (35:03):
Don't be a stranger. Email us or send us a voice memo at boostingourvoices@gmail.com with your health-related questions. Your questions may even be featured on an upcoming podcast episode. Follow Boost Oregon on Instagram, Facebook, Twitter, and TikTok. You can find all of our social media and website information in the show description below. Until next time, thank you for listening, and be well.