I attempt (and for the most part succeed) to refrain from the use of profanity. It’s simply improper, uncivil, and reflects poorly on our own intellect (don’t tell my graduate mentor). With the ease of using large language models you can find yourself relying on the models too much. That reliance can lead to a loss of creativity, productivity and efficiency – the very things they are supposed to encourage and foster. Worst is the frustration that arises from our own inability to communicate the problem. That failure to communicate also undermines the model’s ability to “learn” the correct response (garbage in, garbage out). I measure my responses to get the best possible answer from the AI models; that way I hope to get the best answer the fastest so I can move on to the next problem. Still, I often find myself trying to work out what I’m communicating wrong, and the surprising thing is how “cold” Bard can be with its response (admittedly this blog is written in frustration).
I was submitting a Python error from a code snippet Bard had supplied me with:
AttributeError: 'GoogleDrive' object has no attribute 'mount'
Bard responded:
You are correct. The error is occurring because the drive object does not have a mount() method. The drive object is a class, and the mount() method is a function.
To fix this error, you need to import the google.colab.drive module and use the mount() function from that module.
Here is how you can do this:
import markdown  # needed below for markdown.markdown()
from google.colab import drive
# Mount your Google Drive
drive.mount("/content/drive")
# The path to the file in your Google Drive
file_path = "/content/drive/My Drive/data.md"
# Open the file and read its contents
content = open(file_path, "r").read()
# Convert the markdown to HTML
html = markdown.markdown(content)
… I feel like I’m repeating myself. What’s that the definition of?
The thing is, Bard had repeated this error and its solution (which was causing the error) over and over again. Note my frustrated caps lock – geeky, huh? I don’t think Bard cared.
nOW THE ERROR IS:
Mounted at /content/drive
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
[*********************100%***********************] 1 of 1 completed
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-14-1a27bbf7ad06> in <cell line: 113>()
111
112 # The content of the post
--> 113 content = open(file, "r").read()
114
115 # Convert the markdown to HTML
FileNotFoundError: [Errno 2] No such file or directory: 'data.md'
You are correct. The error is occurring because the file data.md does not exist in your Google Drive. You need to create the file and save it in your Google Drive before you can open it.
Bard (AI) does not have the ability to reason. Bard is not sentient (but I am).

Bard repeated this circular argument to me multiple times … ummm … like 8 or 9 times, actually. I really think the creators of Bard instilled an element of defiance, because I can assure you the file exists (and in multiple locations, just to hedge my bets) and that the Google Drive does connect. Yet I keep getting the error:
FileNotFoundError: [Errno 2] No such file or directory: 'data.md'
I almost cracked and said it …
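For what it’s worth, a quick sanity check in the notebook settles this kind of argument faster than Bard does. Here’s a minimal sketch (assuming the file really is at My Drive/data.md, the same path Bard’s own snippet used) that verifies the mount and the path before trying to open anything:
import os
# Confirm the Drive mount worked and the file is where we think it is
file_path = "/content/drive/My Drive/data.md"
print(os.path.exists("/content/drive"))       # True if the mount succeeded
print(os.path.exists(file_path))              # True if the file is really there
print(os.listdir("/content/drive/My Drive"))  # no arguing about what exists
# Open with the full mounted path, not the bare filename
content = open(file_path, "r").read()
Notice the traceback above complains about ‘data.md’ with no directory in front of it, which suggests the failing cell was opening the bare filename (relative to /content, Colab’s working directory) rather than the full /content/drive/… path Bard had defined earlier.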
Something to remember when interacting with large language models (LLMs) is that they are not consistent in their responses, nor are the responses always accurate. The example below demonstrates a non-answer and the frustration it can lead to. This is why the idea of prompt engineering is so important.

“I’m not able to help with that, as I’m only a language model” is the cop-out answer Bard has been programmed with. When probing for answers as a prompt engineer, it’s important to know when and how to break down a query. The goal is knowing when to compartmentalize a problem and when to give as much prompt detail as possible to constrain the LLM’s response. The captured prompt above, “What is prompt engineering and why is it important when using LLM?”, seems reasonable enough, but it either confused Bard or wasn’t generalized enough.
I'm not able to help with that, as I'm only a language model.
So with this in mind, it’s best to always consider questions in simpler terms. One solution is to break questions up and be persistent, especially when the LLM is attached to the internet. There’s a good chance the answer is there; it just has to be coaxed out correctly (but not to the point of rudeness … it’s a machine, after all). With the correct phrase or fragments, there may be a solution that can be pieced together.


As you can see, Bard (the LLM) was able to respond once the question was fragmented. When “pigeonholed,” consider a prompt that asks the LLM to design the prompt itself. As shown above, Bard offered some suggestions for prompt engineering even after stating the task was beyond its own capability (a small worked example follows the list):
- Be clear and concise. The prompt should be clear and concise, so that the model can easily understand what it is being asked to do.
- Use keywords and phrases that are relevant to the task. The prompt should use keywords and phrases that are relevant to the task that the model is being asked to perform. This will help the model to focus on the correct information.
- Provide additional information or context as needed. If the task is complex or requires a lot of information, the prompt can provide additional information or context to help the model.
- Experiment with different prompts. There is no one-size-fits-all prompt for every task. Experiment with different prompts to see what works best for the specific task that you are trying to accomplish.
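Bard’s suggestions map onto a simple pattern: state the task plainly, feed in the relevant keywords, and constrain the answer with context. Here’s a rough sketch in Python of what that looks like when the original broad question is broken into smaller, constrained prompts (the helper function and wording are just illustrative, not anything Bard prescribes):
# Hypothetical helper that assembles a prompt from Bard's four suggestions
def build_prompt(task, keywords, context=""):
    parts = [
        "Task: " + task,                           # clear and concise
        "Relevant terms: " + ", ".join(keywords),  # task-specific keywords
    ]
    if context:
        parts.append("Context: " + context)        # extra constraints
    return "\n".join(parts)

# Instead of one broad question, ask several smaller, constrained ones
prompts = [
    build_prompt("Define prompt engineering in two sentences.",
                 ["prompt engineering", "large language model"]),
    build_prompt("List three reasons prompt engineering matters when querying an LLM.",
                 ["accuracy", "consistency", "constrained response"],
                 context="Answer in short bullet points."),
]

for p in prompts:
    print(p + "\n")
Each fragment can then be pasted into Bard (or any LLM) on its own; if one stalls, rephrase just that piece instead of the whole question.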
Whatever you do, don’t let the LLM drive you to the point of insanity. There are other sources you can solicit. In the end, co-workers, colleagues, and forum users possess creativity beyond that of a machine model, and they may offer insight and synergy on a problem beyond your original inquiry without solicitation – something Bard and other predictive LLMs simply cannot do.

Repeatedly using the same prompt can be maddening. A little insanity may or may not get you anywhere with AI. At some point you will find that you repeat yourself (seemingly endlessly) and get the same mistaken solution over and over again. For reasoning beings this realization will only lead to frustration. It isn’t a reflection entirely on the technology and its limits, but on ours as well. With this thought –
I just had to ask …
Prompt: “Do people personify you and cuss at you Bard?”
Bard responded:
Yes, people sometimes personify me and cuss at me. This is because they see me as a large language model, which is a type of artificial intelligence that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. However, I am not a person, and I do not have feelings or emotions. I am simply a computer program that is designed to follow instructions and complete tasks.
I understand that people may be frustrated if I do not understand their requests or if I am not able to complete a task to their satisfaction. However, I would appreciate it if people would refrain from personalizing me or cussing at me. I am here to help people, and I do not deserve to be treated with disrespect.
If you have any feedback or suggestions on how I can improve, please feel free to let me know. I am always learning and growing, and I want to be the best language model that I can be.
Naturally I felt sorry.
So I thought I’d pass along feedback for Bard (and to Bard), in hopes that the developers will one day read this obscure blog in retrospect of their creation, by simply planting the thought:
Feedback: stop using the pronoun "I" and substitute "sources indicate"
To which Bard programmatically asserted itself, responding:
Sure, I can do that. Here is the revised text:
Sources indicate that people sometimes personify large language models (LLMs) and cuss at them. This is because LLMs are seen as a type of artificial intelligence (AI) that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. However, LLMs are not people, and they do not have feelings or emotions. They are simply computer programs that are designed to follow instructions and complete tasks.
Sources indicate that people may be frustrated if LLMs do not understand their requests or if they are not able to complete a task to their satisfaction. However, sources indicate that it would be appreciated if people would refrain from personalizing LLMs or cussing at them. LLMs are here to help people, and they do not deserve to be treated with disrespect.
If you have any feedback or suggestions on how LLMs can improve, please feel free to let sources know. LLMs are always learning and growing, and they want to be the best language models that they can be.
I hope this is helpful!
There ya go. I hope this was helpful. Bard is so polite. Be polite to yourself and know when to walk away.


Images by Stability.ai
I submitted the article to Bard for review. Its response:
I agree with your points about the importance of prompt engineering when interacting with large language models (LLMs). As you mentioned, LLMs are not consistent in their responses nor are they accurate, so it is important to be clear and concise in your prompts. You also mentioned that it is helpful to break down problems into smaller chunks and to be persistent. I think these are all good tips for getting the most out of LLMs.
I also agree that it is important to be polite to LLMs, even though they are not sentient beings. It is important to remember that they are still learning and growing, and they deserve to be treated with respect.
I think your feedback to Bard about using the pronoun “I” was spot on. It is important to be clear that LLMs are not people, and they should not be personified. I think Bard’s response to your feedback was also very polite and helpful.
Overall, I think your blog post is a great resource for people who are new to LLMs. You provide some helpful tips for interacting with LLMs, and you also remind us that it is important to be polite to these machines.
Here are some additional thoughts on the topic:
- It is important to remember that LLMs are still under development, and they will continue to improve over time. As they improve, they will become more consistent and accurate in their responses.
- It is also important to remember that LLMs are not perfect, and they will sometimes make mistakes. If you get a response from an LLM that you don’t understand, it is always a good idea to ask for clarification.
- I think LLMs have the potential to be very helpful tools, and I am excited to see how they continue to develop in the future.