Simple Film Lab is open for business. See here.
Photographic film has taken quite a beating in the last decade or so. Film labs have been closing left and right for years now. This is unfortunate, and something I’ve struggled with myself, as I’m primarily a film photographer.
This led me down the path of processing and digitizing my own film, and of developing tools that deliver my images in a way that is complementary to film.
I’ve finally reached a point where I can offer my services to other film photographers.
A Few Things To Note
You shoot film because of the color and look that you get with it, not because it gives you a lot of resolution or is inexpensive. So with that being said, what do I bring to the table with Simple Film Lab that is better than the other film labs out there? If you look at what other labs charge and what I will be charging, I’m certainly not less expensive from a purely monetary stance. I also won’t really be delivering the highest of resolutions either.
In order to really take advantage of what film has to offer, one must beef up the entire imaging chain. Almost every lab I’ve looked at and tried out typically scans with a Noritsu or Frontier scanner and delivers jpegs. You hear a lot about how a Frontier scanner delivers color like this or that, and how some film scanner is beloved by x type of photographers. OK. I mean no disrespect to other film labs, however, having a process where you deliver jpegs of film scans to customers is not doing the customers or film any favors.
It’s all about the color. While I do have a dedicated 35mm film scanner that is very recent and can scan 35mm film at really high resolutions, and I do have a very high resolution flatbed scanner that can scan 120 film at crazy high resolutions, I also have a way to digitize film using a very controlled light source, very good optics, and a reasonably high resolution imaging sensor. The setup I prefer could be called a DSLR film scanner, but it’s actually more complicated than that. Photographic film by definition has very high dynamic range, with a lot of color. When you digitize film, what you are essentially doing is taking a picture of the film emulsion. You can take that picture with a dedicated film scanner, a flatbed scanner, or with a digital or film camera. It’s what you do with the digitized image afterward that makes all the difference.
Typically, the color negative is inverted by either the film scanner itself, the scanning software, or manually in Adobe Photoshop. While one can get good results that way, I’ve brought my skills as a computer programmer to bear and developed code that significantly beefs up the entire imaging and color chain after digitization to full 64 bit floating point in linear color space. What does that even mean? It means the process of turning the color negative into a color positive, along with the following color modifications to get a usable image, happens in 64 bit floating point linear color space. I’d love to be able to deliver 64 bit floating point linear light images to customers, however, that isn’t something any software my customers would realistically have access to supports, so the next best thing is 16 bits per sample (or 48 bit) TIFF files in the ProPhoto color space.
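As a hedged illustration (this is not the actual Simple Film Lab code, which is far more involved), the core of a negative inversion in linear floating point space can be sketched in a few lines of Python. The `base` parameter, standing in for a measured orange film-base color, is an assumption of this toy example:

```python
def invert_negative(rgb, base):
    """Toy color-negative inversion in linear float space.

    rgb  -- a linear-light (r, g, b) sample from the digitized negative
    base -- the measured orange film-base color, also linear (r, g, b)

    Dividing out the film base and inverting gives a rough positive;
    a real pipeline adds calibrated film profiles, 64 bit precision,
    and a ProPhoto output transform on top of this.
    """
    return tuple(1.0 - min(channel / b, 1.0) for channel, b in zip(rgb, base))
```

Everything stays floating point until the very end, which is the whole point: the inversion and subsequent color work never quantize to 8 or 16 bit integers mid-chain.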
Because the high precision digitization workflow requires a calibrated film profile for every film we support, Simple Film Lab cannot accept just any film for processing and digitizing. While we can process pretty much any C-41 film (we use standard Kodak C-41 chemicals), the service we offer is coupled together: when you send film in, it is to be both processed and digitized. The cost might therefore seem high per roll, but factor in that you’re getting processing plus a very high quality film scan, delivered as 48 bit TIFF files in the ProPhoto color space with enough resolution to make 16×24 prints, and it’s worth it. At least, we think there’s a market for it.
The plan is to start accepting processing orders for Kodak Ektar 100 film in 35mm and 120 roll formats in the first quarter of 2017. Kodak Portra 160, Portra 400, and Portra 800 in 120 roll film will follow in the second or third quarter, with 35mm Portra 160, 400, and 800 later in the year if there is demand, along with the 4×5 sheet versions at some point in the second half of 2017. We’re also going to keep things simple in terms of what resolutions we offer: there will really only be two options, standard resolution and custom scan. Standard 2:3 resolution will be 7200×4800 pixels, with other aspect ratios having 4800 pixels on the short side. Custom scan is exactly what it sounds like: a custom scan output to your specifications. The standard processing/scan target price will be $20 per roll, not including shipping, and custom scans will be priced according to how much time and effort Simple Film Lab has to put in. At the end of the day, it all boils down to image processing time and who is spending that time.
All film will always be processed with fresh chemicals, and the target turnaround time will be 5-7 business days. As things pick up, we’ll be adding additional films to the catalog that we support. There are a couple of emulsions that are pretty popular with wedding photographers (Fuji 400H, looking right at you) that we plan to support, however, that comes with some challenges: most labs that cater to processing/scanning that film use Fuji Frontier scanners and already deliver pretty good results, so the biggest issue will be getting customers to move away from those labs and start using Simple Film Lab instead.
Additionally, you can expect very good customer service. As my own customer, I have very high standards, and I’m a firm believer in providing very high standards to my customers. Because Simple Film Lab is a small operation, as a customer, you’ll be dealing directly with me, and it will be my eyeballs that look at every single one of your images before they’re sent to you.
In short, Simple Film Lab is the Film Lab that I would want as a customer. Keep watching this space, good things are on the way.
It depends. Don’t you hate those types of answers? Unfortunately, there is no simple answer, because you have input resolution and output resolution, and sub-types inside those two high-level categories.
Cut to the chase
If all you do is share your pictures online or make 4×6, 5×7, or 8×10 prints, then pretty much any camera made within the last 5-10 years will give you more image resolution than you need. You don’t have anything to worry about and shouldn’t even need to think about it; carry on snapping away. If you want to learn why, then read on.
Keeping it simple
In the interest of distilling information down to useful bite size chunks, let’s start with output resolution.
The vast majority of us don’t actually print our pictures, however, print resolutions are typically a lot higher than screen resolutions for a given amount of surface area, so we’ll talk in terms of print resolution.
For this discussion, we’ll equate dots and pixels as the same thing, so terms like dpi (dots per inch) and ppi (pixels per inch) mean the same thing. Likewise, we’ll say a pixel and a dot are a single discrete full color entity that we can see. It may be made up of one or more smaller sub-parts, like a red element, a green element, and a blue element in a computer display, or multiple ink droplets on a piece of paper, but for this discussion it’s one discrete visible full color unit.
It turns out that the human eye actually does have a finite amount of resolution, in terms of how we would describe it if it were a digital sensor. If we were to print a resolution test pattern onto a piece of paper and look at it up close and personal, the magic number where we stop being able to discern fine detail sits right around 300 dots per inch, or 150 line pairs per inch on the paper (or screen). That means if we take 150 black lines that are 1 pixel wide and 300 pixels tall, alternate them with 150 white lines of the same size, and print them so that all 300 lines fit into a 1×1 inch square, it would actually look like a grey square to most of us instead of alternating black and white lines. This is why most magazines print at 300 dpi, and why your iPhone’s Retina display is also about 300 dpi. Spatially speaking, our vision starts to poop out, and adding more image resolution than that generally does not make the picture have more detail or look sharper to our eyes.
So, with this 300 dpi number, it’s pretty easy to do some simple math and extrapolate out how much output resolution we need for the various ways we look at our pictures: for a 4×6 inch print, 300 dpi times 4 inches is 1200 pixels on the short edge, and 300 dpi times 6 inches is 1800 pixels on the long edge, or an image that is 1800×1200 pixels. That’s a measly 2.1 megapixels.
The same math for an 8×12 (or 8×10) print comes out to 3600×2400 pixels, or a very modest 8.6 megapixels. A 16×24 (or 16×20) print is 7200×4800 pixels, or 34.5 megapixels. Now we’re starting to get into some serious resolution.
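The arithmetic above is simple enough to capture in a couple of helper functions. A quick sketch in Python, using the 300 dpi rule of thumb from above:

```python
def print_pixels(width_in, height_in, dpi=300):
    """Pixel dimensions needed to print a given size at a given dpi."""
    return (width_in * dpi, height_in * dpi)

def megapixels(width_px, height_px):
    """Total pixel count expressed in megapixels."""
    return width_px * height_px / 1_000_000
```

For example, `print_pixels(6, 4)` gives 1800×1200, which `megapixels(1800, 1200)` reports as about 2.2 MP, while a 24×16 print works out to 7200×4800, or roughly 34.6 MP.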
The 16×24 print size notwithstanding, if we take display sizes into account, we soon discover some correlations. The average computer display or HD TV can comfortably display the 1800×1200 resolution image with little to no scaling and look quite good. A newer 4K display can display the 3600×2400 resolution image with little to no scaling and look quite good. It should be noted that the aspect ratios between print and screen aren’t the same, so if you never intend to print, you can crop your images to 16:9 aspect to match your display and simply size your output to either HD (1920×1080) or UHD (3840×2160) resolution and call it a day.
What I do
I’ve standardized my “house format”, if you will, to a 16×24 (or 16×20) print size, meaning all of my keeper images, regardless of their input resolution, get scaled up or down to 7200×4800 pixels. That’s my working/master resolution (as of 2016).
For paying clients with standard uses, the deliverable is 3600×2400 (or 3840×2160) pixels, scaled down from the 7200×4800 master, unless they’re going to print larger than 8×10. In that case, the conversation shifts to commissioning me to capture the image and do the print for them, since I’ll typically want to capture more resolution than normal and work with a print service that specializes in larger print sizes, which involves renting gear appropriate for the client’s desired output size. Depending on the size or aspect ratio of the print, I may break from 7200×4800 and go larger, but that is starting to get into higher end output and very low volume. Keep in mind, a 16×20 or 16×24 print, while not really huge, is not small. It’s four times the size of an 8×10. It’s big enough that you have a frame built for it and hang it on a wall.
This is where it starts to get a little techie and can be a bit confusing if you’re not a tech head, so let’s take it a little at a time. I saved input resolution for last, since what resolution you want to output tends to drive what resolution you need to acquire as your input.
When it comes to input resolution, it can be greatly affected by lots of different factors, so now is a great time to talk about the concept of “effective resolution”. For example, when you take a picture, how much visible resolution ends up in your image as fine detail is affected by things such as mirror slap (if you’re shooting an SLR or DSLR), shutter shock, how long your shutter is open (which affects how much movement happened, which shows up as blurring), your hand shaking the camera when you press the shutter button (which also shows up as blurring), how deep your depth of field is (which affects how much of your image is in really sharp focus), how much image noise the sensor is introducing into the image, how much diffraction is happening in your lens (depending on your f-stop), and how much spatial resolution the lens you’re using is capable of actually putting onto your camera sensor. These are all things that will affect how much resolution you’re effectively putting into your image, regardless of whether you’re shooting film or digital, full frame, medium format, large format, APS-C, Micro 4/3, or smaller. We haven’t even started talking about the raw image sensor resolution yet.
In short, that nice crispy 24 megapixel camera you just picked up? Unless you’re using a really high resolution lens (which is incredibly expensive), and practicing some pretty rigorous shooting process to keep your camera movement and vibration under control, you’re not getting anywhere near 24 megapixels of resolution when you take a picture. Even then, your camera sensor is hiding a dirty little secret.
You see, a 24 megapixel camera outputs an image that is 6000×4000 pixels. It does not actually have 6000 red pixels, 6000 green pixels, and 6000 blue pixels across, and the same goes vertically. Nope. What it does have is 6000×4000 light detecting sensors with a color filter array placed over them (usually in a Bayer pattern). The color filter array takes those 6000×4000 pixels and divides them up between red, green, and blue. Since human vision is most sensitive to green, a full half of the sensor resolution gets filtered to green, and the remaining half goes to red and blue, which each get a quarter of the resolution. To get to an image that you can actually see, this then goes through a demosaicing process into an image that is 6000 red, green, and blue pixels by 4000 red, green, and blue pixels.
What this means is that for a 24 megapixel image captured with a 24 megapixel camera, you are effectively seeing 12 megapixels of green, and 6 megapixels each of red and blue. Even though, spatially speaking, you have 24 million light sensing elements on your sensor, you are not getting 24 megapixels of full color information. It’s actually more like 6 to 9 megapixels of full color information, which interestingly enough is right there in the ball park of making a really nice 8×10 print.
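A quick sketch of that breakdown in Python, using the standard Bayer split described above (half green, a quarter each red and blue):

```python
def bayer_sites(width, height):
    """Photosite counts per color under a Bayer color filter array:
    half the sites are filtered green, a quarter each red and blue."""
    total = width * height
    return {"green": total // 2, "red": total // 4, "blue": total // 4}
```

For a 6000×4000 sensor this yields 12 million green sites and 6 million each of red and blue, which is where the 12 MP green / 6 MP red / 6 MP blue figures come from.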
This is why medium format cameras have been 40+ megapixels for a while now. It’s less about getting the raw spatial resolution, and more about effectively getting more full color spatial resolution. This is why an 8×10 print from a 50 megapixel Canon 5Ds looks so much better than the same picture taken with a ten year old Canon Digital Rebel XTi that’s 10 megapixels. It’s not about the raw spatial resolution, since we can’t really see more than 300 dpi on the page anyway, it’s about getting 300 dpi of full color information.
What about large prints?
But wait a minute: photographers have been making large prints with cameras that don’t have anywhere near that resolution for a while now, and they look great. What gives? Well, as it turns out, the larger you print, the less spatial resolution you actually need. It sounds counterintuitive, but once you get into 16×20 or larger print sizes, you stop looking at the print up close and personal like you would a smaller print, and instead stand back to take it in. The further away from the print you stand, the less dpi your eyes can actually resolve on the print. This is why a 65″ 1080p HDTV, which is only 2 megapixels, still looks good. You sit further away from it than you would a smaller TV. Combine that with the fact that our brains are very good at filling in missing information, and all the photographer has to do is make sure that the image is scaled up in a way that pixelation isn’t obvious if inspected up close. Our brains will do the rest.
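That falloff with viewing distance can be ballparked with a little trigonometry. This sketch assumes the common rule of thumb that the eye resolves about 1 arcminute of angle, which is an assumption of the sketch rather than something from the post:

```python
import math

def resolvable_dpi(viewing_distance_in):
    """Approximate dpi the eye can resolve at a viewing distance (inches),
    assuming ~1 arcminute of visual acuity (a common rule of thumb)."""
    one_arcminute = math.radians(1 / 60)
    return 1.0 / (viewing_distance_in * math.tan(one_arcminute))
```

At a typical reading distance of about a foot this lands near the 300 dpi figure from earlier, while at 10 feet it drops below 30 dpi, which is exactly why a 2 megapixel 65″ HDTV still looks good from the couch.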
With that being said, for larger print sizes more camera resolution will generally result in a better looking output image simply because we’re then putting more raw spatial and full color resolution into the image and have a lot more real estate that we can fill up with that resolution, until we effectively have more than 300 dpi that we’re putting on the print surface.
So how much image resolution do we really need? For the average person sharing online and making 8×10 or smaller prints, a camera that is at least 6 or 7 megapixels will provide totally usable images. The larger you print, the more resolution you’ll want. Digital cameras have just recently gotten to the point where we can actually capture and put all the full color resolution that we can see into an 8×10 print, which makes for super fabulous prints, so this is a great time to be taking pictures.
What is image sharpening? We’ve all heard about it and have undoubtedly heard about various image sharpening tools like unsharp mask or smart sharpen, but I’ve found that very few of us actually understand what image sharpening is.
So what is image sharpening?
No matter what image sharpening tool or algorithm you use, they all have the same end result, which is to increase the contrast of the lines and edges of objects in the image. That’s all image sharpening is. The primary differences between the various algorithms or methods of sharpening isn’t the end result, but rather how the lines and edges of objects in the image are detected. Likewise the various sliders or controls you get for each method of sharpening are to control the amount of sharpening and to fine tune the underlying line and edge detection for that sharpening method.
So there you have it. Image sharpening explained in 4 simple sentences. It’s not that difficult. The human visual system is extremely good at detecting lines and edges, so when we sharpen an image, all we’re doing is making what we’re visually sensitive to more pronounced. It’s a very effective visual perceptual trick that we’re playing on our brains when we sharpen an image.
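To make the edge-contrast idea concrete, here is a toy one-dimensional unsharp mask in Python. It’s only a sketch: real tools work in 2-D with Gaussian blurs and extra controls, but the blur, subtract, and boost structure is the same:

```python
def unsharp_1d(signal, amount=1.0):
    """Minimal 1-D unsharp mask: low-pass the signal, subtract the
    blur to isolate edges, then add the edge detail back scaled by
    `amount`, increasing contrast right at the edges."""
    n = len(signal)
    # 3-tap box blur as a stand-in low-pass filter (ends clamped)
    blurred = [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
    # high-pass = original minus blur; boosting it raises edge contrast
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]
```

Run it on a step edge like `[0, 0, 0, 1, 1, 1]` and the values on either side of the step overshoot (below 0 and above 1): the contrast of the edge went up, which is all sharpening ever does.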
I’ve noticed some image sharpening trends over the last 5-6 years that really bother me, and that make me think all these people on the internet who dispense photography information and advice, and are supposed to be photography experts, don’t really know what they’re talking about or doing. I can’t help myself. I have to say something about it.
The image sharpening aesthetic
This is a huge pet peeve of mine. All too often, people think that a sharp picture has lots of detail. As a result, they sharpen their images way, way, way too much. The Internet is riddled with posts on how and when to sharpen. You have input sharpening, creative sharpening, and output sharpening. You have tons of sharpening algorithms, plugins, and tools to increase the clarity of your images. There are companies out there whose entire business model is literally based on selling you something that will help you sharpen your images. On the camera hardware front, as of late, it seems that if a newly released camera doesn’t output a ridiculously over-sharpened image the Internet declares that camera as a piece of garbage. Ugh.
On top of that, the internet is flooded with images that are painfully over-sharpened (usually as a result of said company that sells image sharpening tools), all in the name of having a nice sharp image that is “crispy”. You know what else makes a nice sharp image? A nice moderately high resolution image that has a depth of field that is large enough so that the whole subject of the image is entirely in focus, assuming the person taking the picture actually nailed the focus.
Image focus and resolution to the rescue
I can’t believe how many people get a super fast prime lens, proceed to shoot with it wide open, then sharpen the everliving daylights out of the resulting images in an effort to get the subject sharp. It’s almost as if they don’t realize that when you’re shooting an 85mm+ lens on a full frame camera at f/1.4 or f/1.2, the depth of field is so small that the only thing in the image that’s going to be in nice sharp focus is one eye, or the tip of the subject’s nose, or one of the subject’s cheek bones, or their lips, or whatever the camera actually happened to lock focus on. Having a really small depth of field definitely has its uses, but if you want a nice sharp image, try stopping your aperture down to something like f/8. You’d be amazed at how much more resolution and fine detail is there, and how much sharper your photos are as a result once they’re scaled to the final output resolution. Again, assuming that you actually nailed the focus.
Have you ever seen a picture that actually had as much resolution and fine detail as what could be natively represented by the medium displaying the image? Probably not, but you’ll know it when you see it.
I’ll give an example: have you ever watched an HD movie on your iPhone in HD? You should try it some time. It looks incredible. The reason why isn’t because of image sharpening, but rather because you’re displaying the maximum amount of resolution and fine detail that your iPhone screen can natively represent.
Another example: ever seen Christopher Nolan’s “The Dark Knight” movie on Blu-Ray? He shot parts of the movie on very high resolution IMAX cameras and cut those scenes in with the rest of the film, which was shot on standard super-35 film. Even at Blu-Ray resolution (which is a whole whopping 2 megapixels image size), the difference in image resolution and fine detail between super-35 film and IMAX’s 65mm film is stunning. The IMAX sequences just look a lot sharper, not because of image sharpening, but because they contain as much resolution and fine detail as what can be packed into a 1920×1080 pixel image size, which results in a picture that looks very sharp with very little actual image sharpening applied or needed.
Huh. We just came full circle back to image sharpening. Imagine that.
OK. Sooo… When do you do image sharpening?
Ideally, you sharpen at the very end, when your image is at its final output size. If you have a good image with good resolution and fine detail, you’ll discover that sharpening is often like what salt and pepper are to a great meal. A little bit goes a long way, and used just right, it greatly enhances things; however, more is rarely better.
There are other places in your workflow where you can sharpen, like input sharpening, and creative sharpening, and those instances do have their uses, however, they tend to be really over-used and abused, so for the sake of simplicity, we’ll leave them off the discussion table for now and maybe visit them in separate posts.
You may have noticed that I brought up image resolution and fine detail a number of times in this post while talking about image sharpening. How much resolution you actually need is a subject for a different post, so we’ll get into that later. Suffice it to say, you don’t need nearly as much resolution as you think you do; the trick is acquiring that resolution in a way that makes for sharp images that need very little image sharpening to look good.
Till next time.
Usually, we want to reduce the size of images, not enlarge them. However, there are times where we have a reasonably small source image that we need to make bigger for one reason or another.
I actually routinely enlarge images because I have a variety of cameras that I create images with and they’re all different resolutions and bit depths, so I normalize the keepers to one larger master resolution and bit depth.
It should be noted that my method described below is not a miracle worker. You won’t be making 10x or 20x enlargements that look good, but you can easily make 3-4x enlargements that will look totally passable.
The best part? You don’t have to pay for any software beyond just having Adobe Photoshop, so no plug-ins or extra software to buy (Perfect Resize, I’m looking right at you).
The how is actually pretty simple, and before I explain the specifics of what I do in my version of Photoshop, I’ll explain the generic version so that you can convert it to your software, which may not be the same thing as what I’m running. Ok? Let’s get started.
Before doing this, start with your source image and save it as a 16 bit tiff file at its original resolution.
- Figure out what your resulting resolution is to be and multiply the longest edge of the image by four. For example, I generally normalize to 7200×4800 pixels, so I’d end up with 28,800×19,200 pixels. Resize your source image to that new, really huge resolution.
- Add some noise to the newly resized really huge image. The better way to do this is with monochromatic Gaussian noise so it looks more organic, but if your software doesn’t do that, any noise is better than no noise. I generally add between 5% and 10%.
- Add a Gaussian blur to the really huge image. Its radius should be 2 pixels.
- Resize the really huge image down to your resulting image size. Save it as a 16 bit tiff file and do the rest of whatever post processing you’re going to do.
That’s it! This method also works like a charm for upgrading 8 bit images to nice smooth 16 bit images that you can really push around in Lightroom/Photoshop without any ugly banding or posterization popping out at you.
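For the curious, the whole pipeline can be sketched in plain Python on a toy grayscale image. This is not the Photoshop workflow itself; it uses nearest-neighbour upscaling and a box blur as crude stand-ins for bicubic resampling and Gaussian blur, but the order of operations is the same:

```python
import random

def enlarge(img, scale=2, noise=0.05, seed=0):
    """Toy upscale -> noise -> blur -> downscale enlargement on a
    grayscale image given as a list of rows of 0..1 floats."""
    rng = random.Random(seed)
    work = scale * 4  # work at 4x the final size, as in the steps above
    src_h, src_w = len(img), len(img[0])

    # Step 1: nearest-neighbour upscale to the really huge working size
    big = [[img[y // work][x // work] for x in range(src_w * work)]
           for y in range(src_h * work)]

    # Step 2: add a little gaussian noise
    big = [[px + rng.gauss(0.0, noise) for px in row] for row in big]

    # Step 3: small box blur (stand-in for the 2 px gaussian blur)
    h, w = len(big), len(big[0])
    blurred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [big[cy][cx]
                    for cy in range(max(y - 1, 0), min(y + 2, h))
                    for cx in range(max(x - 1, 0), min(x + 2, w))]
            blurred[y][x] = sum(vals) / len(vals)

    # Step 4: box-average back down to the final (scale x) size
    out_h, out_w = src_h * scale, src_w * scale
    final = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            block = [blurred[y * 4 + dy][x * 4 + dx]
                     for dy in range(4) for dx in range(4)]
            final[y][x] = sum(block) / 16
    return final
```

The noise and blur are injected while the image is four times larger than its destination, so by the time the final downscale averages everything back together, they read as smooth film-like texture rather than as damage.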
Why it works
What?!?! You’re adding noise and blur to the image! Doesn’t that destroy image quality and detail and make the image blurry?
If we were to do that to an image at its native size, then yes, all we would be doing is adding noise and making it blurry. The thing to keep in mind is that we’re doing this to an image that is 4 times the linear resolution (16 times the pixel count) of our final output resolution.
When we scale the image back down, that noise and blur has the effect of filling in and smoothing out what would otherwise be pixelation in the image.
The key to this working so well is the bit depth and the ratios that we use. The small amount of noise that gets added at 4x resolution gets reduced down and visually provides a smoothing effect at the new image size without making it look soft or blurry. Likewise, the Gaussian blur we added was two pixels on an image that is 4x the linear resolution of what it ultimately will be, meaning that when we scale the image back down, a 16×16 block of pixels gets turned into a 4×4 block: we’re scaling down more than we blurred. When we scale down more than we blur, the applied blur starts to do interesting things for us. It visually fills in and smooths out what would otherwise be pixelation in the image.
Combined with the added noise in step 2, it’s a very dramatic one-two punch to an image that would otherwise look pixelated and awful.
How I do it
We all use different software. I happen to use the latest version Adobe Lightroom CC and Adobe Photoshop CC. This isn’t a tutorial for how to use Lightroom or Photoshop, it’s just a basic walk-through of what I do. You can and should modify it to suit your needs.
All of my images start off in Lightroom at their original resolution and bit depth. I have a Lightroom catalog that I use for staging these images for processing, where all of my keepers make their first stop. In this catalog, I add all my metadata to each image (it makes tracking it later easier), and the only image adjustment I make here is to remove image noise that was introduced by the camera in the form of color and/or luminance noise. I’m very conservative with this and look at the image at a 1:1 or 2:1 ratio in the area where noise is most prominent. I only do just barely enough noise removal to tone down the noise, since the more noise removal you do, the more the image fine detail gets muddled. This is done on an image by image basis, and how much noise reduction is applied varies greatly depending on which camera took the image and what ISO it was shot at.
From there, I export the image as a 16 bit tiff file at the “super-sized” resolution (28,800×19,200 pixels or thereabouts, depending on the image aspect ratio).
I then open that tiff file in Photoshop and add a layer over the background layer that is the image. I change the layer’s blending mode to “overlay” and fill it with 50% gray. From there I convert the layer to a smart object.
With the smart object selected, I go to the ‘Filter’ menu and select ‘Noise’->’Add Noise’. In the dialog box that pops up, select ‘Gaussian’ and check the ‘Monochromatic’ check-box. Change the amount to a value between 5% and 10%. I’ve found that less than 5% tends not to be enough, and more than 10% is too much. I generally set it to 7.5% as a start, then tweak it up or down as needed for best results. You should experiment to find what values work best for you based on what resolution you are working at.
From there, I go to the ‘Filter’ menu again and select ‘Blur’->’Gaussian Blur’. In the pop up dialog box I select a radius of 2 pixels.
From there, tweak the amount of noise up or down for best results (you can do this because it’s a smart object).
When you’re happy with the image, go to the ‘Layer’ menu and select ‘Flatten Image’.
Now resize the image to your final enlarged size (in my case 7200×4800 pixels) and save it as a 16 bit tiff that you’ll pull into your real Lightroom catalog.
Import the new enlarged tiff file into your Lightroom catalog that you use for managing your media, convert it to a DNG file, then finish the rest of your post processing on the file.
Isn’t this a lot of work?
Yes and no. We only do this on our keeper images, which for most photographers is only a fraction of the images that they take. That and the only thing we’re doing that is any different is the Photoshop bit. You should still be doing noise reduction, adding meta-data, and post processing. The only real difference is a quick middle step in Photoshop that literally only takes a couple minutes per image, if that.
Everybody should do what works for them, and what works for me isn’t necessarily for everybody, however, the process I outlined above allows me to shoot everything from an iPhone jpeg, to a DSLR raw or jpeg image, to frame grabs from video as jpegs, and end up with a reasonably sized standard image resolution that is actually very usable.
As proof, almost all of the images I’ve shot digitally recently have had this treatment, and if I hadn’t told you, you’d be none the wiser while looking at them. It makes differentiating between jpeg and raw, and between lower iPhone/video resolution and DSLR resolution, extremely difficult, which is the point.
This works best with an image that already has reasonably good resolution content to begin with. This does not magically add detail where there is none, nor does it really add resolution or rescue images that already look terrible.
What it does do is add very high frequency or broadband information to an image in a way that our brains find very pleasant, which allows our brains to do the heavy lifting of ‘seeing’ what detail is there in the enlarged image without seeing the unpleasant visual effects of scaling up that image. In short, we’re playing a very effective visual trick on our brains in very much the same way that adding dithering to digital audio allows us to hear further down into the sound.
Our brains are very good at filtering out high frequency broadband noise to get to the detail in the noise, as long as that noise isn’t overwhelming to the point of being distracting. The trick is riding that balance between helping the perceived image quality and hurting it.
Till next time.
I can’t believe it’s been almost five years since I last posted here.
Time sure flies.
Well, some changes are afoot, so you’ll start seeing some stuff move around.
I’m going to get rid of the life stuff, keep the humor stuff, and add photo and video stuff.
There will definitely be some more activity for sure. I’ve been pretty active on the internet, but it’s been diffuse all over the place via a bunch of online social sites. Those have served me well, but they’ve really just been for short content.
I’ve got some longer form content that I want to produce, and the easiest way to do it is here.
A man went into a pharmacy and asked to talk to a male pharmacist. The woman he was talking to said that she was the pharmacist and that she and her sister owned the store, so there were no males employed there. She asked if there was something which she could help the gentleman with.
The man said that it was something that he would be much more comfortable discussing with a male pharmacist.
The female pharmacist assured him that she was completely professional and whatever it was that he needed to discuss, he could be confident that she would treat him with the highest level of professionalism.
The man agreed and began by saying, “This is tough for me to discuss, but I have a permanent erection. It causes me a lot of problems and severe embarrassment. So I was wondering what you could give me for it?”
The pharmacist said, “Just a minute, I’ll go talk to my sister.”
When she returned, she said, “We discussed it at length and the absolute best we can do is, 1/3 ownership in the store, a company car, and $3000 a month living expenses.”
The manager of a large office noticed a new man one day and told him to come into his office. “What is your name?” was the first thing the manager asked the new guy.
“John,” the new guy replied.
The manager scowled, “Look, I don’t know what kind of a namby-pamby place you worked at before, but I don’t call anyone by their first name. It breeds familiarity and that leads to a breakdown in authority. I refer to my employees by their last name only – Smith, Jones, Baker – that’s all. I am to be referred to only as Mr. Robertson. Now that we got that straight, what is your last name?”
The new guy sighed and said, “Darling. My name is John Darling.”
Have you ever told a white lie? You are going to love this, especially all of the ladies who bake for church events:
Alice Grayson was to bake a cake for the Baptist Church Ladies’ Group in Tuscaloosa, but forgot to do it until the last minute. She remembered it the morning of the bake sale and after rummaging through cabinets, found an angel food cake mix & quickly made it while drying her hair, dressing, and helping her son pack up for Scout camp.
When she took the cake from the oven, the center had dropped flat and the cake was horribly disfigured, and she exclaimed, “Oh dear, there is no time to bake another cake!” This cake was important to Alice because she did so want to fit in at her new church, and in her new community of friends. So, being inventive, she looked around the house for something to build up the center of the cake. She found it in the bathroom – a roll of toilet paper. She plunked it in and then covered it with icing. Not only did the finished product look beautiful, it looked perfect.
Before she left the house to drop the cake by the church and head for work, Alice woke her daughter and gave her some money and specific instructions to be at the bake sale the moment it opened at 9:30 and to buy the cake and bring it home. When the daughter, Amanda, arrived at the sale, she found the attractive, perfect cake had already been sold. Amanda grabbed her cell phone and called her mom. Alice was horrified; she was beside herself! Everyone would know! What would they think? She would be ostracized, talked about, ridiculed! All night, Alice lay awake in bed thinking about people pointing fingers at her and talking about her behind her back.
The next day, Alice promised herself she would try not to think about the cake and would attend the fancy luncheon/bridal shower at the home of a fellow church member and try to have a good time. She did not really want to attend because the hostess was a snob who more than once had looked down her nose at the fact that Alice was a single parent and not from the founding families of Tuscaloosa, but having already RSVP’d, she couldn’t think of a believable excuse to stay home. The meal was elegant, the company was definitely upper-crust old South, and to Alice’s horror, the cake in question was presented for dessert! Alice felt the blood drain from her body when she saw the cake! She started out of her chair to tell the hostess all about it, but before she could get to her feet, the Mayor’s wife said, “What a beautiful cake!” Alice, still stunned, sat back in her chair when she heard the hostess (who was a prominent church member) say, “Thank you, I baked it myself.”
Alice smiled and thought to herself, “God is good.”
On little Larry’s first day of first grade, he raised his hand as soon as the teacher came into the room and said, “I don’t belong here, I should be in third grade!”
The teacher looked at little Larry’s records and told him to please take his seat.
Not five minutes passed when little Larry stood up again and said, “I don’t belong here, I should be in the third grade!”
Larry did this a few more times before the principal came along and the teacher explained Larry’s problem. The principal and the first grade teacher told little Larry that if he could answer some questions that they could decide in which grade he belonged. Well, they soon discovered that Larry knew all the state capitals and country capitals that the principal could think of.
The teacher suggested they try some biology questions … “What does a cow have 4 of but a woman has only 2?” asked the teacher.
“Legs!” Larry immediately replied. “What does a man have in his pants that a woman doesn’t?” asked the teacher.
“Pockets!” said Larry.
The teacher looked at the principal, who said, “Maybe he should be in third grade, I missed those last two questions!”