
RE: [openrisc] Cache Line Fill



Damjan,

Thank you very much for your dedicated support. I will try it and let you
know.
Let me clarify one thing that you mentioned in the previous email.

In order to use the quick_mem for the I cache, we need to do a mem copy from
ROM to QMEM using the data bus, which is also connected to the QMEM along with
the instruction bus. This is just like a back door to initialize the QMEM.
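
Here is how I picture the copy routine (just a sketch; the ROM/QMEM base
addresses and the image size below are assumptions on my part, not the actual
memory map):

#include <stdint.h>

/* Assumed addresses for illustration only; the real QMEM base and size
 * depend on how or1200_qmem_top.v is configured. */
#define ROM_BASE   0xf0000000u
#define QMEM_BASE  0x00800000u
#define FW_SIZE    0x8000u          /* firmware image size in bytes */

/* Runs out of ROM and uses the data bus as the "back door": copy the
 * firmware image into the QMEM word by word; after that, execution can
 * jump into the QMEM address range. */
void qmem_init(void)
{
    volatile uint32_t *src = (volatile uint32_t *)ROM_BASE;
    volatile uint32_t *dst = (volatile uint32_t *)QMEM_BASE;
    uint32_t i;

    for (i = 0; i < FW_SIZE / 4; i++)
        dst[i] = src[i];
}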

Do we need to enable the instruction cache after the mem copy routine has
executed?
Is the instruction cache hit always forced to "1"?
What is the memory mapping of the QMEM in this configuration?
And what should the QMEM initialization sequence be?

Thanks again

Michael Phan

-----Original Message-----
From: owner-openrisc@opencores.org
[mailto:owner-openrisc@opencores.org] On Behalf Of Damjan Lampret
Sent: 08 July 2003 17:18
To: openrisc@opencores.org; mphan@nimbuswireless.com
Subject: Re: [openrisc] Cache Line Fill


Michael,

you can check out or1k/or1200 (or or1k/orp/ ...) with the following branch
tag: branch_qmem

Make sure that, among the other or1200 RTL files, you also have
or1200_qmem_top.v; this is where the embedded memory sits. (I had some
problems creating the branch, so please check that you have this file.)

Then go to or1200_defines.v and enable OR1200_QMEM_IMPLEMENTED.
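
For reference, the relevant line in or1200_defines.v is just:

`define OR1200_QMEM_IMPLEMENTED

(If it is commented out in your checkout, uncomment it.)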

regards,
Damjan

----- Original Message -----
From: <mphan01@earthlink.net>
To: <mphan@nimbuswireless.com>; <openrisc@opencores.org>
Sent: Thursday, July 03, 2003 11:17 AM
Subject: RE: [openrisc] Cache Line Fill


> Hi Damjan,
>
> Please let me know if you have already checked in the new code base for
> "I RAM replacement for I cache".
> I hope you had a chance to do the final testing.
>
> Thanks
>
> Michael Phan
>
>
>
> ----- Original Message -----
> From: "Michael Phan" <mphan@n... >
> To: <openrisc@o... >
> Date: Thu, 26 Jun 2003 14:41:17 -0700
> Subject: RE: [openrisc] Cache Line Fill
>
> >
> >
> > Sounds good to me.
> >
> > Thanks
> >
> > Michael Phan
> >
> >
> > -----Original Message-----
> > From: owner-openrisc@o...
> > [mailto:owner-openrisc@o... ] On Behalf Of Damjan Lampret
> > Sent: Thursday, June 26, 2003 11:37 PM
> > To: Openrisc@O...
> > Subject: Re: [openrisc] Cache Line Fill
> >
> >
> > Hi Michael,
> >
> > yes I'm back. Thanks.
> >
> > I'm doing some final testing so I'm sure I don't check in some garbage.
> > I'll check in tomorrow evening or Saturday (depending on when I'm done
> > with testing).
> >
> > regards,
> > Damjan
> >
> > ----- Original Message -----
> > From: <mphan@n... >
> > To: <lampret@o... >; <openrisc@o... >
> > Sent: Thursday, June 26, 2003 11:51 AM
> > Subject: Re: [openrisc] Cache Line Fill
> >
> >
> > > Hi Damjan,
> > >
> > > I hope you are back from travelling and had a nice trip.
> > > Did you check in the changes for "I RAM replacement for I cache" yet?
> > > I am waiting to test them out.
> > >
> > > Thanks
> > > Michael Phan
> > >
> > >
> > > ----- Original Message -----
> > > From: "Damjan Lampret" <lampret@o... >
> > > To: <mphan@n... >,
> > > <openrisc@o... >
> > > Date: Sat, 7 Jun 2003 00:47:12 -0700
> > > Subject: Re: [openrisc] Cache Line Fill
> > >
> > > >
> > > >
> > > > Michael,
> > > >
> > > > I assume you mean the I RAM replacement for the I cache? Not yet;
> > > > for the moment you can use RAM connected to the iwb, and I'll commit
> > > > the changes to the CVS next weekend (I'm travelling from this weekend
> > > > until next weekend).
> > > >
> > > > regards,
> > > > Damjan
> > > >
> > > > ----- Original Message -----
> > > > From: <mphan@n... >
> > > > To: <mphan@n... >;
> > > > <openrisc@o... >
> > > > Sent: Friday, June 06, 2003 11:11 AM
> > > > Subject: Re: [openrisc] Cache Line Fill
> > > >
> > > >
> > > > >
> > > > > Hi Damjan,
> > > > >
> > > > > Just want to touch base with you on this project, "instruction
> > > > > execution with 0 wait state": did you have a chance to put the
> > > > > changes into the CVS so we can download and try them out?
> > > > >
> > > > > Thanks in advance
> > > > > Michael Phan
> > > > >
> > > > >
> > > > > ----- Original Message -----
> > > > > From: mphan@n...
> > > > > To: lampret@o... , openrisc@o...
> > > > > Date: Tue, 6 May 2003 17:57:54 -0100
> > > > > Subject: Re: [openrisc] Cache Line Fill
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > Hi Damjan,
> > > > > >
> > > > > > Your feedback is precious and helpful to our project. Exactly:
> > > > > > we don't need to use the MMU and want to replace the cache with
> > > > > > fixed memory for instruction execution with 0 wait states. So
> > > > > > please put your changes in the CVS at your convenience so we can
> > > > > > try them out and measure the performance improvement. Our project
> > > > > > needs about 512 KB for the cache.
> > > > > >
> > > > > > A thousand thanks
> > > > > > Michael Phan
> > > > > >
> > > > > >
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > From: "Damjan Lampret" <lampret@o... >
> > > > > > To: <openrisc@o... >
> > > > > > Date: Mon, 5 May 2003 21:21:38 -0700
> > > > > > Subject: Re: [openrisc] Cache Line Fill
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > ----- Original Message -----
> > > > > > > From: <mphan@n... >
> > > > > > > To: <openrisc@o... >
> > > > > > > Sent: Thursday, May 01, 2003 3:36 PM
> > > > > > > Subject: [openrisc] Cache Line Fill
> > > > > > >
> > > > > > >
> > > > > > > > Hi Damjan.
> > > > > > > >
> > > > > > > > In the current design of orp_soc, with or1200_registered_output
> > > > > > > > supported, a cache line fill takes 8 clocks (2-2-2-2) to fetch
> > > > > > > > 4 DWORDs from the SRAM/FLASH, and a single DWORD fetch takes
> > > > > > > > 3 clocks (including one idle cycle with wb_cyc_o deasserted).
> > > > > > > >
> > > > > > > > If we have a very fast internal SRAM, is it possible to do a
> > > > > > > > cache line fill in 4 or 5 clocks (1/2-1-1-1) by changing the
> > > > > > > > wb_stb logic in or1200_wb_biu.v, and a single DWORD fetch in
> > > > > > > > 2 clocks?
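> > > > > > > >
> > > > > > > > (That is: today a line fill is 4 beats x 2 clocks = 8 clocks,
> > > > > > > > while the proposal is 1 or 2 clocks for the first beat plus
> > > > > > > > 1 clock for each of the remaining 3 beats = 4 or 5 clocks.)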
> > > > > > >
> > > > > > > OR1200 as it is has been used in SoC projects much more complex
> > > > > > > than orp_soc. In all these projects the memory subsystem takes
> > > > > > > more than 1/2-1-1-1, so the current 2-2-2-2 was fast enough for
> > > > > > > all SoCs. If you have a faster memory subsystem, however, then
> > > > > > > modification of or1200_wb_biu and possibly the IC/DC state
> > > > > > > machines will be needed.
> > > > > > >
> > > > > > > >
> > > > > > > > My next question is: can we increase the cache size to 512 KB
> > > > > > > > to hold the whole firmware and execute instructions from it
> > > > > > > > with 0 wait states?
> > > > > > > >
> > > > > > >
> > > > > > > If you want to use the MMUs, then no. This is because the MMU's
> > > > > > > page translation is done at the same time as the cache access -
> > > > > > > the virtual page number is translated at the same time as the
> > > > > > > cache hit is determined. Since the page size is 8KB, the largest
> > > > > > > direct-mapped cache can only be 8KB, unless you use several ways
> > > > > > > (a 512KB cache would need 512KB / 8KB = 64 ways), or unless the
> > > > > > > cache access takes an additional clock cycle (maybe acceptable
> > > > > > > for data accesses?).
> > > > > > >
> > > > > > > Anyway, if you don't need the MMU, then your cache sizes are
> > > > > > > not limited. To increase the cache size, just add a new IC/DC
> > > > > > > configuration (search for "configuration" in or1200_defines.v,
> > > > > > > and when you find the IC and DC configurations, add a new size
> > > > > > > and then enable the new configuration).
> > > > > > > Right now there are configurations for 4KB and 8KB caches.
> > > > > > >
> > > > > > > I'm working on one project where, similar to your case, all
> > > > > > > code needs to be accessible in 0 wait states. What I plan to do
> > > > > > > is to replace the caches with fixed memories - basically
> > > > > > > removing the TAG RAMs and making sure that the "hit" always
> > > > > > > happens when accessing a certain address range and never happens
> > > > > > > when accessing outside of that range. This will effectively
> > > > > > > change the caches into fixed RAMs, much like DSP RAMs or
> > > > > > > similar.
> > > > > > > If you want these changes, I can put them into the CVS with
> > > > > > > appropriate defines. But it will take a few days.
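> > > > > > >
> > > > > > > In C terms the idea is roughly the following (just a sketch;
> > > > > > > the window base and the 512KB size are assumptions for
> > > > > > > illustration, not the actual values):
> > > > > > >
> > > > > > > #include <stdint.h>
> > > > > > >
> > > > > > > /* Hypothetical fixed-RAM window: 512KB at an assumed base. */
> > > > > > > #define FIXED_RAM_BASE 0x00100000u
> > > > > > > #define FIXED_RAM_SIZE (512u * 1024u)
> > > > > > >
> > > > > > > /* Replaces the TAG RAM compare: "hit" is 1 for any access
> > > > > > >  * inside the window and 0 for any access outside it. */
> > > > > > > int fixed_ram_hit(uint32_t addr)
> > > > > > > {
> > > > > > >     return addr >= FIXED_RAM_BASE &&
> > > > > > >            addr < FIXED_RAM_BASE + FIXED_RAM_SIZE;
> > > > > > > }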
> > > > > > >
> > > > > > > regards,
> > > > > > > Damjan
> > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > Michael Phan
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > >
> > >
> >


--
To unsubscribe from openrisc mailing list please visit http://www.opencores.org/mailinglists.shtml